00:00:00.001 Started by upstream project "autotest-per-patch" build number 127156 00:00:00.001 originally caused by: 00:00:00.001 Started by user sys_sgci 00:00:00.131 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-phy.groovy 00:00:00.132 The recommended git tool is: git 00:00:00.132 using credential 00000000-0000-0000-0000-000000000002 00:00:00.134 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvmf-tcp-phy-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10 00:00:00.168 Fetching changes from the remote Git repository 00:00:00.172 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10 00:00:00.205 Using shallow fetch with depth 1 00:00:00.205 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool 00:00:00.205 > git --version # timeout=10 00:00:00.233 > git --version # 'git version 2.39.2' 00:00:00.233 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials 00:00:00.248 Setting http proxy: proxy-dmz.intel.com:911 00:00:00.248 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5 00:00:06.442 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10 00:00:06.454 > git rev-parse FETCH_HEAD^{commit} # timeout=10 00:00:06.465 Checking out Revision bd3e126a67c072de18fcd072f7502b1f7801d6ff (FETCH_HEAD) 00:00:06.465 > git config core.sparsecheckout # timeout=10 00:00:06.474 > git read-tree -mu HEAD # timeout=10 00:00:06.494 > git checkout -f bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=5 00:00:06.513 Commit message: "jenkins/autotest: add raid-vg subjob to autotest configs" 00:00:06.513 > git rev-list --no-walk bd3e126a67c072de18fcd072f7502b1f7801d6ff # timeout=10 00:00:06.604 [Pipeline] Start of Pipeline 00:00:06.621 [Pipeline] library 00:00:06.622 Loading library shm_lib@master 00:00:06.623 Library shm_lib@master is cached. Copying from home. 00:00:06.635 [Pipeline] node 00:00:06.649 Running on WFP8 in /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:00:06.651 [Pipeline] { 00:00:06.659 [Pipeline] catchError 00:00:06.661 [Pipeline] { 00:00:06.673 [Pipeline] wrap 00:00:06.680 [Pipeline] { 00:00:06.690 [Pipeline] stage 00:00:06.692 [Pipeline] { (Prologue) 00:00:06.870 [Pipeline] sh 00:00:07.158 + logger -p user.info -t JENKINS-CI 00:00:07.176 [Pipeline] echo 00:00:07.177 Node: WFP8 00:00:07.182 [Pipeline] sh 00:00:07.480 [Pipeline] setCustomBuildProperty 00:00:07.489 [Pipeline] echo 00:00:07.491 Cleanup processes 00:00:07.494 [Pipeline] sh 00:00:07.777 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.777 36945 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:07.789 [Pipeline] sh 00:00:08.074 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:00:08.074 ++ grep -v 'sudo pgrep' 00:00:08.074 ++ awk '{print $1}' 00:00:08.074 + sudo kill -9 00:00:08.074 + true 00:00:08.090 [Pipeline] cleanWs 00:00:08.100 [WS-CLEANUP] Deleting project workspace... 00:00:08.100 [WS-CLEANUP] Deferred wipeout is used... 
00:00:08.107 [WS-CLEANUP] done 00:00:08.111 [Pipeline] setCustomBuildProperty 00:00:08.126 [Pipeline] sh 00:00:08.408 + sudo git config --global --replace-all safe.directory '*' 00:00:08.497 [Pipeline] httpRequest 00:00:08.554 [Pipeline] echo 00:00:08.556 Sorcerer 10.211.164.101 is alive 00:00:08.565 [Pipeline] httpRequest 00:00:08.571 HttpMethod: GET 00:00:08.572 URL: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:08.572 Sending request to url: http://10.211.164.101/packages/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:08.588 Response Code: HTTP/1.1 200 OK 00:00:08.589 Success: Status code 200 is in the accepted range: 200,404 00:00:08.589 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.224 [Pipeline] sh 00:00:10.510 + tar --no-same-owner -xf jbp_bd3e126a67c072de18fcd072f7502b1f7801d6ff.tar.gz 00:00:10.523 [Pipeline] httpRequest 00:00:10.554 [Pipeline] echo 00:00:10.555 Sorcerer 10.211.164.101 is alive 00:00:10.563 [Pipeline] httpRequest 00:00:10.568 HttpMethod: GET 00:00:10.569 URL: http://10.211.164.101/packages/spdk_58883cba9088e9e7c34049065c929fe202e3a295.tar.gz 00:00:10.569 Sending request to url: http://10.211.164.101/packages/spdk_58883cba9088e9e7c34049065c929fe202e3a295.tar.gz 00:00:10.586 Response Code: HTTP/1.1 200 OK 00:00:10.586 Success: Status code 200 is in the accepted range: 200,404 00:00:10.587 Saving response body to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk_58883cba9088e9e7c34049065c929fe202e3a295.tar.gz 00:01:15.364 [Pipeline] sh 00:01:15.646 + tar --no-same-owner -xf spdk_58883cba9088e9e7c34049065c929fe202e3a295.tar.gz 00:01:18.197 [Pipeline] sh 00:01:18.481 + git -C spdk log --oneline -n5 00:01:18.481 58883cba9 bdev/compress: release reduce vol resource when comp bdev fails to be created. 
00:01:18.481 b8378f94e scripts/pkgdep: Set yum's skip_if_unavailable=True under rocky8 00:01:18.481 c2a77f51e module/bdev/nvme: add detach-monitor poller 00:01:18.481 e14876e17 lib/nvme: add spdk_nvme_scan_attached() 00:01:18.481 1d6dfcbeb nvme_pci: ctrlr_scan_attached callback 00:01:18.494 [Pipeline] } 00:01:18.511 [Pipeline] // stage 00:01:18.519 [Pipeline] stage 00:01:18.521 [Pipeline] { (Prepare) 00:01:18.538 [Pipeline] writeFile 00:01:18.552 [Pipeline] sh 00:01:18.835 + logger -p user.info -t JENKINS-CI 00:01:18.847 [Pipeline] sh 00:01:19.132 + logger -p user.info -t JENKINS-CI 00:01:19.144 [Pipeline] sh 00:01:19.432 + cat autorun-spdk.conf 00:01:19.432 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.432 SPDK_TEST_NVMF=1 00:01:19.432 SPDK_TEST_NVME_CLI=1 00:01:19.432 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.432 SPDK_TEST_NVMF_NICS=e810 00:01:19.432 SPDK_TEST_VFIOUSER=1 00:01:19.432 SPDK_RUN_UBSAN=1 00:01:19.432 NET_TYPE=phy 00:01:19.441 RUN_NIGHTLY=0 00:01:19.445 [Pipeline] readFile 00:01:19.469 [Pipeline] withEnv 00:01:19.471 [Pipeline] { 00:01:19.485 [Pipeline] sh 00:01:19.770 + set -ex 00:01:19.770 + [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf ]] 00:01:19.770 + source /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:19.770 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:19.770 ++ SPDK_TEST_NVMF=1 00:01:19.770 ++ SPDK_TEST_NVME_CLI=1 00:01:19.770 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:19.770 ++ SPDK_TEST_NVMF_NICS=e810 00:01:19.770 ++ SPDK_TEST_VFIOUSER=1 00:01:19.770 ++ SPDK_RUN_UBSAN=1 00:01:19.770 ++ NET_TYPE=phy 00:01:19.770 ++ RUN_NIGHTLY=0 00:01:19.770 + case $SPDK_TEST_NVMF_NICS in 00:01:19.770 + DRIVERS=ice 00:01:19.770 + [[ tcp == \r\d\m\a ]] 00:01:19.770 + [[ -n ice ]] 00:01:19.770 + sudo rmmod mlx4_ib mlx5_ib irdma i40iw iw_cxgb4 00:01:19.770 rmmod: ERROR: Module mlx4_ib is not currently loaded 00:01:19.770 rmmod: ERROR: Module mlx5_ib is not currently loaded 00:01:19.770 rmmod: ERROR: Module irdma is not currently loaded 00:01:19.770 rmmod: ERROR: Module i40iw is not currently loaded 00:01:19.770 rmmod: ERROR: Module iw_cxgb4 is not currently loaded 00:01:19.770 + true 00:01:19.770 + for D in $DRIVERS 00:01:19.770 + sudo modprobe ice 00:01:19.770 + exit 0 00:01:19.779 [Pipeline] } 00:01:19.795 [Pipeline] // withEnv 00:01:19.800 [Pipeline] } 00:01:19.817 [Pipeline] // stage 00:01:19.826 [Pipeline] catchError 00:01:19.827 [Pipeline] { 00:01:19.838 [Pipeline] timeout 00:01:19.838 Timeout set to expire in 50 min 00:01:19.840 [Pipeline] { 00:01:19.854 [Pipeline] stage 00:01:19.856 [Pipeline] { (Tests) 00:01:19.870 [Pipeline] sh 00:01:20.186 + jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.186 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.186 + DIR_ROOT=/var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.186 + [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest ]] 00:01:20.186 + DIR_SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:20.186 + DIR_OUTPUT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.186 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk ]] 00:01:20.186 + [[ ! 
-d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.186 + mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/output 00:01:20.186 + [[ -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/output ]] 00:01:20.186 + [[ nvmf-tcp-phy-autotest == pkgdep-* ]] 00:01:20.186 + cd /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:01:20.186 + source /etc/os-release 00:01:20.186 ++ NAME='Fedora Linux' 00:01:20.186 ++ VERSION='38 (Cloud Edition)' 00:01:20.186 ++ ID=fedora 00:01:20.186 ++ VERSION_ID=38 00:01:20.186 ++ VERSION_CODENAME= 00:01:20.186 ++ PLATFORM_ID=platform:f38 00:01:20.186 ++ PRETTY_NAME='Fedora Linux 38 (Cloud Edition)' 00:01:20.186 ++ ANSI_COLOR='0;38;2;60;110;180' 00:01:20.186 ++ LOGO=fedora-logo-icon 00:01:20.186 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:38 00:01:20.186 ++ HOME_URL=https://fedoraproject.org/ 00:01:20.186 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f38/system-administrators-guide/ 00:01:20.186 ++ SUPPORT_URL=https://ask.fedoraproject.org/ 00:01:20.186 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/ 00:01:20.186 ++ REDHAT_BUGZILLA_PRODUCT=Fedora 00:01:20.186 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=38 00:01:20.186 ++ REDHAT_SUPPORT_PRODUCT=Fedora 00:01:20.186 ++ REDHAT_SUPPORT_PRODUCT_VERSION=38 00:01:20.186 ++ SUPPORT_END=2024-05-14 00:01:20.186 ++ VARIANT='Cloud Edition' 00:01:20.186 ++ VARIANT_ID=cloud 00:01:20.186 + uname -a 00:01:20.186 Linux spdk-wfp-08 6.7.0-68.fc38.x86_64 #1 SMP PREEMPT_DYNAMIC Mon Jan 15 00:59:40 UTC 2024 x86_64 GNU/Linux 00:01:20.186 + sudo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:01:22.835 Hugepages 00:01:22.835 node hugesize free / total 00:01:22.835 node0 1048576kB 0 / 0 00:01:22.835 node0 2048kB 0 / 0 00:01:22.835 node1 1048576kB 0 / 0 00:01:22.835 node1 2048kB 0 / 0 00:01:22.835 00:01:22.835 Type BDF Vendor Device NUMA Driver Device Block devices 00:01:22.835 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:01:22.835 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:01:22.835 NVMe 0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:01:22.835 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:01:22.835 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:01:22.835 + rm -f /tmp/spdk-ld-path 00:01:22.835 + source autorun-spdk.conf 00:01:22.835 ++ SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.835 ++ SPDK_TEST_NVMF=1 00:01:22.835 ++ SPDK_TEST_NVME_CLI=1 00:01:22.835 ++ SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.835 ++ SPDK_TEST_NVMF_NICS=e810 00:01:22.835 ++ SPDK_TEST_VFIOUSER=1 00:01:22.835 ++ SPDK_RUN_UBSAN=1 00:01:22.835 ++ NET_TYPE=phy 00:01:22.835 ++ RUN_NIGHTLY=0 00:01:22.835 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 )) 00:01:22.835 + [[ -n '' ]] 00:01:22.835 + sudo git config --global --add safe.directory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:22.835 + for M in /var/spdk/build-*-manifest.txt 00:01:22.835 + [[ -f 
/var/spdk/build-pkg-manifest.txt ]] 00:01:22.835 + cp /var/spdk/build-pkg-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.835 + for M in /var/spdk/build-*-manifest.txt 00:01:22.835 + [[ -f /var/spdk/build-repo-manifest.txt ]] 00:01:22.835 + cp /var/spdk/build-repo-manifest.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/output/ 00:01:22.835 ++ uname 00:01:22.835 + [[ Linux == \L\i\n\u\x ]] 00:01:22.835 + sudo dmesg -T 00:01:22.835 + sudo dmesg --clear 00:01:22.835 + dmesg_pid=37884 00:01:22.835 + [[ Fedora Linux == FreeBSD ]] 00:01:22.835 + export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.835 + UNBIND_ENTIRE_IOMMU_GROUP=yes 00:01:22.835 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]] 00:01:22.835 + export VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.835 + VM_IMAGE=/var/spdk/dependencies/vhost/spdk_test_image.qcow2 00:01:22.835 + [[ -x /usr/src/fio-static/fio ]] 00:01:22.835 + export FIO_BIN=/usr/src/fio-static/fio 00:01:22.835 + sudo dmesg -Tw 00:01:22.835 + FIO_BIN=/usr/src/fio-static/fio 00:01:22.835 + [[ '' == \/\v\a\r\/\j\e\n\k\i\n\s\/\w\o\r\k\s\p\a\c\e\/\n\v\m\f\-\t\c\p\-\p\h\y\-\a\u\t\o\t\e\s\t\/\q\e\m\u\_\v\f\i\o\/* ]] 00:01:22.835 + [[ ! -v VFIO_QEMU_BIN ]] 00:01:22.835 + [[ -e /usr/local/qemu/vfio-user-latest ]] 00:01:22.835 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.835 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:01:22.835 + [[ -e /usr/local/qemu/vanilla-latest ]] 00:01:22.835 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.835 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:01:22.835 + spdk/autorun.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/autorun-spdk.conf 00:01:22.835 Test configuration: 00:01:22.835 SPDK_RUN_FUNCTIONAL_TEST=1 00:01:22.835 SPDK_TEST_NVMF=1 00:01:22.835 SPDK_TEST_NVME_CLI=1 00:01:22.835 SPDK_TEST_NVMF_TRANSPORT=tcp 00:01:22.835 SPDK_TEST_NVMF_NICS=e810 00:01:22.835 SPDK_TEST_VFIOUSER=1 00:01:22.835 SPDK_RUN_UBSAN=1 00:01:22.835 NET_TYPE=phy 00:01:22.835 RUN_NIGHTLY=0 11:47:09 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:01:22.835 11:47:09 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:01:22.835 11:47:09 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:01:22.835 11:47:09 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:01:22.835 11:47:09 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.836 11:47:09 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.836 11:47:09 -- paths/export.sh@4 -- $ 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.836 11:47:09 -- paths/export.sh@5 -- $ export PATH 00:01:22.836 11:47:09 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:01:22.836 11:47:09 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:01:22.836 11:47:09 -- common/autobuild_common.sh@447 -- $ date +%s 00:01:22.836 11:47:09 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721900829.XXXXXX 00:01:22.836 11:47:09 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721900829.kUB5OS 00:01:22.836 11:47:09 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:01:22.836 11:47:09 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:01:22.836 11:47:09 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:01:22.836 11:47:09 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:01:22.836 11:47:09 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:01:22.836 11:47:09 -- common/autobuild_common.sh@463 -- $ get_config_params 00:01:22.836 11:47:09 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:01:22.836 11:47:09 -- common/autotest_common.sh@10 -- $ set +x 00:01:22.836 11:47:09 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:01:22.836 11:47:09 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:01:22.836 11:47:09 -- pm/common@17 -- $ local monitor 00:01:22.836 11:47:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.836 11:47:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.836 11:47:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.836 11:47:09 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:01:22.836 11:47:09 -- pm/common@25 -- $ sleep 1 00:01:22.836 11:47:09 -- pm/common@21 -- $ date +%s 00:01:22.836 11:47:09 -- pm/common@21 -- $ date +%s 00:01:22.836 11:47:09 -- pm/common@21 -- $ date +%s 00:01:22.836 11:47:09 -- pm/common@21 -- $ date +%s 00:01:22.836 11:47:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900829 00:01:22.836 11:47:09 -- pm/common@21 -- 
$ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900829 00:01:22.836 11:47:09 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900829 00:01:22.836 11:47:09 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autobuild.sh.1721900829 00:01:22.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900829_collect-vmstat.pm.log 00:01:22.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900829_collect-cpu-load.pm.log 00:01:22.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900829_collect-cpu-temp.pm.log 00:01:22.836 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autobuild.sh.1721900829_collect-bmc-pm.bmc.pm.log 00:01:23.778 11:47:10 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:01:23.778 11:47:10 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD= 00:01:23.778 11:47:10 -- spdk/autobuild.sh@12 -- $ umask 022 00:01:23.778 11:47:10 -- spdk/autobuild.sh@13 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:01:23.778 11:47:10 -- spdk/autobuild.sh@16 -- $ date -u 00:01:23.778 Thu Jul 25 09:47:10 AM UTC 2024 00:01:23.778 11:47:10 -- spdk/autobuild.sh@17 -- $ git describe --tags 00:01:23.778 v24.09-pre-303-g58883cba9 00:01:23.778 11:47:10 -- spdk/autobuild.sh@19 -- $ '[' 0 -eq 1 ']' 00:01:23.778 11:47:10 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']' 00:01:23.778 11:47:10 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan' 00:01:23.778 11:47:10 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:23.778 11:47:10 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:23.778 11:47:10 -- common/autotest_common.sh@10 -- $ set +x 00:01:23.778 ************************************ 00:01:23.778 START TEST ubsan 00:01:23.778 ************************************ 00:01:23.778 11:47:10 ubsan -- common/autotest_common.sh@1123 -- $ echo 'using ubsan' 00:01:23.778 using ubsan 00:01:23.778 00:01:23.778 real 0m0.000s 00:01:23.778 user 0m0.000s 00:01:23.778 sys 0m0.000s 00:01:23.778 11:47:10 ubsan -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:01:23.778 11:47:10 ubsan -- common/autotest_common.sh@10 -- $ set +x 00:01:23.778 ************************************ 00:01:23.778 END TEST ubsan 00:01:23.778 ************************************ 00:01:23.778 11:47:11 -- common/autotest_common.sh@1142 -- $ return 0 00:01:23.778 11:47:11 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']' 00:01:23.778 11:47:11 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in 00:01:23.778 11:47:11 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]] 00:01:23.778 11:47:11 -- spdk/autobuild.sh@67 -- $ 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user --with-shared 00:01:24.038 Using default SPDK env in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:01:24.038 Using default DPDK in /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:24.298 Using 'verbs' RDMA provider 00:01:37.465 Configuring ISA-L (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal.log)...done. 00:01:47.474 Configuring ISA-L-crypto (logfile: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.spdk-isal-crypto.log)...done. 00:01:47.735 Creating mk/config.mk...done. 00:01:47.735 Creating mk/cc.flags.mk...done. 00:01:47.735 Type 'make' to build. 00:01:47.735 11:47:34 -- spdk/autobuild.sh@69 -- $ run_test make make -j96 00:01:47.735 11:47:34 -- common/autotest_common.sh@1099 -- $ '[' 3 -le 1 ']' 00:01:47.735 11:47:34 -- common/autotest_common.sh@1105 -- $ xtrace_disable 00:01:47.735 11:47:34 -- common/autotest_common.sh@10 -- $ set +x 00:01:47.735 ************************************ 00:01:47.735 START TEST make 00:01:47.735 ************************************ 00:01:47.735 11:47:34 make -- common/autotest_common.sh@1123 -- $ make -j96 00:01:48.305 make[1]: Nothing to be done for 'all'. 00:01:49.694 The Meson build system 00:01:49.694 Version: 1.3.1 00:01:49.694 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user 00:01:49.694 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:49.695 Build type: native build 00:01:49.695 Project name: libvfio-user 00:01:49.695 Project version: 0.0.1 00:01:49.695 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:49.695 C linker for the host machine: cc ld.bfd 2.39-16 00:01:49.695 Host machine cpu family: x86_64 00:01:49.695 Host machine cpu: x86_64 00:01:49.695 Run-time dependency threads found: YES 00:01:49.695 Library dl found: YES 00:01:49.695 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:49.695 Run-time dependency json-c found: YES 0.17 00:01:49.695 Run-time dependency cmocka found: YES 1.1.7 00:01:49.695 Program pytest-3 found: NO 00:01:49.695 Program flake8 found: NO 00:01:49.695 Program misspell-fixer found: NO 00:01:49.695 Program restructuredtext-lint found: NO 00:01:49.695 Program valgrind found: YES (/usr/bin/valgrind) 00:01:49.695 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:49.695 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:49.695 Compiler for C supports arguments -Wwrite-strings: YES 00:01:49.695 ../libvfio-user/test/meson.build:20: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 00:01:49.695 Program test-lspci.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-lspci.sh) 00:01:49.695 Program test-linkage.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/libvfio-user/test/test-linkage.sh) 00:01:49.695 ../libvfio-user/test/py/meson.build:16: WARNING: Project targets '>= 0.53.0' but uses feature introduced in '0.57.0': exclude_suites arg in add_test_setup. 
00:01:49.695 Build targets in project: 8 00:01:49.695 WARNING: Project specifies a minimum meson_version '>= 0.53.0' but uses features which were added in newer versions: 00:01:49.695 * 0.57.0: {'exclude_suites arg in add_test_setup'} 00:01:49.695 00:01:49.695 libvfio-user 0.0.1 00:01:49.695 00:01:49.695 User defined options 00:01:49.695 buildtype : debug 00:01:49.695 default_library: shared 00:01:49.695 libdir : /usr/local/lib 00:01:49.695 00:01:49.695 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:49.953 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:49.953 [1/37] Compiling C object samples/null.p/null.c.o 00:01:49.953 [2/37] Compiling C object samples/shadow_ioeventfd_server.p/shadow_ioeventfd_server.c.o 00:01:49.953 [3/37] Compiling C object samples/lspci.p/lspci.c.o 00:01:49.953 [4/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci.c.o 00:01:49.953 [5/37] Compiling C object lib/libvfio-user.so.0.0.1.p/irq.c.o 00:01:49.953 [6/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran.c.o 00:01:49.953 [7/37] Compiling C object samples/gpio-pci-idio-16.p/gpio-pci-idio-16.c.o 00:01:49.953 [8/37] Compiling C object samples/client.p/.._lib_tran.c.o 00:01:49.953 [9/37] Compiling C object test/unit_tests.p/.._lib_irq.c.o 00:01:49.953 [10/37] Compiling C object test/unit_tests.p/.._lib_tran.c.o 00:01:49.953 [11/37] Compiling C object test/unit_tests.p/.._lib_migration.c.o 00:01:49.953 [12/37] Compiling C object samples/client.p/.._lib_migration.c.o 00:01:49.953 [13/37] Compiling C object test/unit_tests.p/.._lib_tran_pipe.c.o 00:01:49.953 [14/37] Compiling C object lib/libvfio-user.so.0.0.1.p/migration.c.o 00:01:49.953 [15/37] Compiling C object test/unit_tests.p/mocks.c.o 00:01:49.953 [16/37] Compiling C object test/unit_tests.p/.._lib_pci.c.o 00:01:49.953 [17/37] Compiling C object samples/client.p/.._lib_tran_sock.c.o 00:01:49.953 [18/37] Compiling C object lib/libvfio-user.so.0.0.1.p/dma.c.o 00:01:49.953 [19/37] Compiling C object test/unit_tests.p/.._lib_tran_sock.c.o 00:01:49.953 [20/37] Compiling C object test/unit_tests.p/.._lib_dma.c.o 00:01:49.953 [21/37] Compiling C object lib/libvfio-user.so.0.0.1.p/pci_caps.c.o 00:01:49.953 [22/37] Compiling C object lib/libvfio-user.so.0.0.1.p/tran_sock.c.o 00:01:49.953 [23/37] Compiling C object samples/server.p/server.c.o 00:01:49.953 [24/37] Compiling C object test/unit_tests.p/.._lib_pci_caps.c.o 00:01:49.953 [25/37] Compiling C object test/unit_tests.p/unit-tests.c.o 00:01:49.953 [26/37] Compiling C object lib/libvfio-user.so.0.0.1.p/libvfio-user.c.o 00:01:49.953 [27/37] Compiling C object samples/client.p/client.c.o 00:01:50.211 [28/37] Linking target samples/client 00:01:50.211 [29/37] Linking target lib/libvfio-user.so.0.0.1 00:01:50.211 [30/37] Compiling C object test/unit_tests.p/.._lib_libvfio-user.c.o 00:01:50.211 [31/37] Linking target test/unit_tests 00:01:50.211 [32/37] Generating symbol file lib/libvfio-user.so.0.0.1.p/libvfio-user.so.0.0.1.symbols 00:01:50.211 [33/37] Linking target samples/gpio-pci-idio-16 00:01:50.211 [34/37] Linking target samples/null 00:01:50.211 [35/37] Linking target samples/server 00:01:50.211 [36/37] Linking target samples/shadow_ioeventfd_server 00:01:50.211 [37/37] Linking target samples/lspci 00:01:50.211 INFO: autodetecting backend as ninja 00:01:50.211 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 
00:01:50.211 DESTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user meson install --quiet -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug 00:01:50.780 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/build-debug' 00:01:50.780 ninja: no work to do. 00:01:56.111 The Meson build system 00:01:56.111 Version: 1.3.1 00:01:56.111 Source dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk 00:01:56.111 Build dir: /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp 00:01:56.111 Build type: native build 00:01:56.111 Program cat found: YES (/usr/bin/cat) 00:01:56.111 Project name: DPDK 00:01:56.111 Project version: 24.03.0 00:01:56.111 C compiler for the host machine: cc (gcc 13.2.1 "cc (GCC) 13.2.1 20231011 (Red Hat 13.2.1-4)") 00:01:56.111 C linker for the host machine: cc ld.bfd 2.39-16 00:01:56.111 Host machine cpu family: x86_64 00:01:56.111 Host machine cpu: x86_64 00:01:56.111 Message: ## Building in Developer Mode ## 00:01:56.111 Program pkg-config found: YES (/usr/bin/pkg-config) 00:01:56.111 Program check-symbols.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/check-symbols.sh) 00:01:56.111 Program options-ibverbs-static.sh found: YES (/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/buildtools/options-ibverbs-static.sh) 00:01:56.111 Program python3 found: YES (/usr/bin/python3) 00:01:56.111 Program cat found: YES (/usr/bin/cat) 00:01:56.111 Compiler for C supports arguments -march=native: YES 00:01:56.111 Checking for size of "void *" : 8 00:01:56.111 Checking for size of "void *" : 8 (cached) 00:01:56.111 Compiler for C supports link arguments -Wl,--undefined-version: NO 00:01:56.111 Library m found: YES 00:01:56.111 Library numa found: YES 00:01:56.111 Has header "numaif.h" : YES 00:01:56.111 Library fdt found: NO 00:01:56.111 Library execinfo found: NO 00:01:56.111 Has header "execinfo.h" : YES 00:01:56.111 Found pkg-config: YES (/usr/bin/pkg-config) 1.8.0 00:01:56.111 Run-time dependency libarchive found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency libbsd found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency jansson found: NO (tried pkgconfig) 00:01:56.111 Run-time dependency openssl found: YES 3.0.9 00:01:56.111 Run-time dependency libpcap found: YES 1.10.4 00:01:56.111 Has header "pcap.h" with dependency libpcap: YES 00:01:56.111 Compiler for C supports arguments -Wcast-qual: YES 00:01:56.111 Compiler for C supports arguments -Wdeprecated: YES 00:01:56.111 Compiler for C supports arguments -Wformat: YES 00:01:56.111 Compiler for C supports arguments -Wformat-nonliteral: NO 00:01:56.111 Compiler for C supports arguments -Wformat-security: NO 00:01:56.111 Compiler for C supports arguments -Wmissing-declarations: YES 00:01:56.111 Compiler for C supports arguments -Wmissing-prototypes: YES 00:01:56.111 Compiler for C supports arguments -Wnested-externs: YES 00:01:56.111 Compiler for C supports arguments -Wold-style-definition: YES 00:01:56.111 Compiler for C supports arguments -Wpointer-arith: YES 00:01:56.111 Compiler for C supports arguments -Wsign-compare: YES 00:01:56.111 Compiler for C supports arguments -Wstrict-prototypes: YES 00:01:56.111 Compiler for C supports arguments -Wundef: YES 00:01:56.111 Compiler for C supports arguments -Wwrite-strings: YES 00:01:56.111 Compiler for C supports arguments -Wno-address-of-packed-member: YES 00:01:56.111 Compiler for C supports arguments -Wno-packed-not-aligned: 
YES 00:01:56.111 Compiler for C supports arguments -Wno-missing-field-initializers: YES 00:01:56.111 Compiler for C supports arguments -Wno-zero-length-bounds: YES 00:01:56.111 Program objdump found: YES (/usr/bin/objdump) 00:01:56.111 Compiler for C supports arguments -mavx512f: YES 00:01:56.111 Checking if "AVX512 checking" compiles: YES 00:01:56.111 Fetching value of define "__SSE4_2__" : 1 00:01:56.111 Fetching value of define "__AES__" : 1 00:01:56.111 Fetching value of define "__AVX__" : 1 00:01:56.111 Fetching value of define "__AVX2__" : 1 00:01:56.111 Fetching value of define "__AVX512BW__" : 1 00:01:56.111 Fetching value of define "__AVX512CD__" : 1 00:01:56.111 Fetching value of define "__AVX512DQ__" : 1 00:01:56.111 Fetching value of define "__AVX512F__" : 1 00:01:56.111 Fetching value of define "__AVX512VL__" : 1 00:01:56.111 Fetching value of define "__PCLMUL__" : 1 00:01:56.111 Fetching value of define "__RDRND__" : 1 00:01:56.111 Fetching value of define "__RDSEED__" : 1 00:01:56.111 Fetching value of define "__VPCLMULQDQ__" : (undefined) 00:01:56.111 Fetching value of define "__znver1__" : (undefined) 00:01:56.111 Fetching value of define "__znver2__" : (undefined) 00:01:56.111 Fetching value of define "__znver3__" : (undefined) 00:01:56.111 Fetching value of define "__znver4__" : (undefined) 00:01:56.112 Compiler for C supports arguments -Wno-format-truncation: YES 00:01:56.112 Message: lib/log: Defining dependency "log" 00:01:56.112 Message: lib/kvargs: Defining dependency "kvargs" 00:01:56.112 Message: lib/telemetry: Defining dependency "telemetry" 00:01:56.112 Checking for function "getentropy" : NO 00:01:56.112 Message: lib/eal: Defining dependency "eal" 00:01:56.112 Message: lib/ring: Defining dependency "ring" 00:01:56.112 Message: lib/rcu: Defining dependency "rcu" 00:01:56.112 Message: lib/mempool: Defining dependency "mempool" 00:01:56.112 Message: lib/mbuf: Defining dependency "mbuf" 00:01:56.112 Fetching value of define "__PCLMUL__" : 1 (cached) 00:01:56.112 Fetching value of define "__AVX512F__" : 1 (cached) 00:01:56.112 Fetching value of define "__AVX512BW__" : 1 (cached) 00:01:56.112 Fetching value of define "__AVX512DQ__" : 1 (cached) 00:01:56.112 Fetching value of define "__AVX512VL__" : 1 (cached) 00:01:56.112 Fetching value of define "__VPCLMULQDQ__" : (undefined) (cached) 00:01:56.112 Compiler for C supports arguments -mpclmul: YES 00:01:56.112 Compiler for C supports arguments -maes: YES 00:01:56.112 Compiler for C supports arguments -mavx512f: YES (cached) 00:01:56.112 Compiler for C supports arguments -mavx512bw: YES 00:01:56.112 Compiler for C supports arguments -mavx512dq: YES 00:01:56.112 Compiler for C supports arguments -mavx512vl: YES 00:01:56.112 Compiler for C supports arguments -mvpclmulqdq: YES 00:01:56.112 Compiler for C supports arguments -mavx2: YES 00:01:56.112 Compiler for C supports arguments -mavx: YES 00:01:56.112 Message: lib/net: Defining dependency "net" 00:01:56.112 Message: lib/meter: Defining dependency "meter" 00:01:56.112 Message: lib/ethdev: Defining dependency "ethdev" 00:01:56.112 Message: lib/pci: Defining dependency "pci" 00:01:56.112 Message: lib/cmdline: Defining dependency "cmdline" 00:01:56.112 Message: lib/hash: Defining dependency "hash" 00:01:56.112 Message: lib/timer: Defining dependency "timer" 00:01:56.112 Message: lib/compressdev: Defining dependency "compressdev" 00:01:56.112 Message: lib/cryptodev: Defining dependency "cryptodev" 00:01:56.112 Message: lib/dmadev: Defining dependency "dmadev" 00:01:56.112 
Compiler for C supports arguments -Wno-cast-qual: YES 00:01:56.112 Message: lib/power: Defining dependency "power" 00:01:56.112 Message: lib/reorder: Defining dependency "reorder" 00:01:56.112 Message: lib/security: Defining dependency "security" 00:01:56.112 Has header "linux/userfaultfd.h" : YES 00:01:56.112 Has header "linux/vduse.h" : YES 00:01:56.112 Message: lib/vhost: Defining dependency "vhost" 00:01:56.112 Compiler for C supports arguments -Wno-format-truncation: YES (cached) 00:01:56.112 Message: drivers/bus/pci: Defining dependency "bus_pci" 00:01:56.112 Message: drivers/bus/vdev: Defining dependency "bus_vdev" 00:01:56.112 Message: drivers/mempool/ring: Defining dependency "mempool_ring" 00:01:56.112 Message: Disabling raw/* drivers: missing internal dependency "rawdev" 00:01:56.112 Message: Disabling regex/* drivers: missing internal dependency "regexdev" 00:01:56.112 Message: Disabling ml/* drivers: missing internal dependency "mldev" 00:01:56.112 Message: Disabling event/* drivers: missing internal dependency "eventdev" 00:01:56.112 Message: Disabling baseband/* drivers: missing internal dependency "bbdev" 00:01:56.112 Message: Disabling gpu/* drivers: missing internal dependency "gpudev" 00:01:56.112 Program doxygen found: YES (/usr/bin/doxygen) 00:01:56.112 Configuring doxy-api-html.conf using configuration 00:01:56.112 Configuring doxy-api-man.conf using configuration 00:01:56.112 Program mandb found: YES (/usr/bin/mandb) 00:01:56.112 Program sphinx-build found: NO 00:01:56.112 Configuring rte_build_config.h using configuration 00:01:56.112 Message: 00:01:56.112 ================= 00:01:56.112 Applications Enabled 00:01:56.112 ================= 00:01:56.112 00:01:56.112 apps: 00:01:56.112 00:01:56.112 00:01:56.112 Message: 00:01:56.112 ================= 00:01:56.112 Libraries Enabled 00:01:56.112 ================= 00:01:56.112 00:01:56.112 libs: 00:01:56.112 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf, 00:01:56.112 net, meter, ethdev, pci, cmdline, hash, timer, compressdev, 00:01:56.112 cryptodev, dmadev, power, reorder, security, vhost, 00:01:56.112 00:01:56.112 Message: 00:01:56.112 =============== 00:01:56.112 Drivers Enabled 00:01:56.112 =============== 00:01:56.112 00:01:56.112 common: 00:01:56.112 00:01:56.112 bus: 00:01:56.112 pci, vdev, 00:01:56.112 mempool: 00:01:56.112 ring, 00:01:56.112 dma: 00:01:56.112 00:01:56.112 net: 00:01:56.112 00:01:56.112 crypto: 00:01:56.112 00:01:56.112 compress: 00:01:56.112 00:01:56.112 vdpa: 00:01:56.112 00:01:56.112 00:01:56.112 Message: 00:01:56.112 ================= 00:01:56.112 Content Skipped 00:01:56.112 ================= 00:01:56.112 00:01:56.112 apps: 00:01:56.112 dumpcap: explicitly disabled via build config 00:01:56.112 graph: explicitly disabled via build config 00:01:56.112 pdump: explicitly disabled via build config 00:01:56.112 proc-info: explicitly disabled via build config 00:01:56.112 test-acl: explicitly disabled via build config 00:01:56.112 test-bbdev: explicitly disabled via build config 00:01:56.112 test-cmdline: explicitly disabled via build config 00:01:56.112 test-compress-perf: explicitly disabled via build config 00:01:56.112 test-crypto-perf: explicitly disabled via build config 00:01:56.112 test-dma-perf: explicitly disabled via build config 00:01:56.112 test-eventdev: explicitly disabled via build config 00:01:56.112 test-fib: explicitly disabled via build config 00:01:56.112 test-flow-perf: explicitly disabled via build config 00:01:56.112 test-gpudev: explicitly disabled via build config 
00:01:56.112 test-mldev: explicitly disabled via build config 00:01:56.112 test-pipeline: explicitly disabled via build config 00:01:56.112 test-pmd: explicitly disabled via build config 00:01:56.112 test-regex: explicitly disabled via build config 00:01:56.112 test-sad: explicitly disabled via build config 00:01:56.112 test-security-perf: explicitly disabled via build config 00:01:56.112 00:01:56.112 libs: 00:01:56.112 argparse: explicitly disabled via build config 00:01:56.112 metrics: explicitly disabled via build config 00:01:56.112 acl: explicitly disabled via build config 00:01:56.112 bbdev: explicitly disabled via build config 00:01:56.112 bitratestats: explicitly disabled via build config 00:01:56.112 bpf: explicitly disabled via build config 00:01:56.112 cfgfile: explicitly disabled via build config 00:01:56.112 distributor: explicitly disabled via build config 00:01:56.112 efd: explicitly disabled via build config 00:01:56.112 eventdev: explicitly disabled via build config 00:01:56.112 dispatcher: explicitly disabled via build config 00:01:56.112 gpudev: explicitly disabled via build config 00:01:56.112 gro: explicitly disabled via build config 00:01:56.112 gso: explicitly disabled via build config 00:01:56.112 ip_frag: explicitly disabled via build config 00:01:56.112 jobstats: explicitly disabled via build config 00:01:56.112 latencystats: explicitly disabled via build config 00:01:56.112 lpm: explicitly disabled via build config 00:01:56.112 member: explicitly disabled via build config 00:01:56.112 pcapng: explicitly disabled via build config 00:01:56.112 rawdev: explicitly disabled via build config 00:01:56.112 regexdev: explicitly disabled via build config 00:01:56.112 mldev: explicitly disabled via build config 00:01:56.112 rib: explicitly disabled via build config 00:01:56.112 sched: explicitly disabled via build config 00:01:56.112 stack: explicitly disabled via build config 00:01:56.112 ipsec: explicitly disabled via build config 00:01:56.112 pdcp: explicitly disabled via build config 00:01:56.112 fib: explicitly disabled via build config 00:01:56.112 port: explicitly disabled via build config 00:01:56.112 pdump: explicitly disabled via build config 00:01:56.112 table: explicitly disabled via build config 00:01:56.112 pipeline: explicitly disabled via build config 00:01:56.112 graph: explicitly disabled via build config 00:01:56.112 node: explicitly disabled via build config 00:01:56.112 00:01:56.112 drivers: 00:01:56.112 common/cpt: not in enabled drivers build config 00:01:56.112 common/dpaax: not in enabled drivers build config 00:01:56.112 common/iavf: not in enabled drivers build config 00:01:56.112 common/idpf: not in enabled drivers build config 00:01:56.112 common/ionic: not in enabled drivers build config 00:01:56.112 common/mvep: not in enabled drivers build config 00:01:56.112 common/octeontx: not in enabled drivers build config 00:01:56.112 bus/auxiliary: not in enabled drivers build config 00:01:56.112 bus/cdx: not in enabled drivers build config 00:01:56.112 bus/dpaa: not in enabled drivers build config 00:01:56.112 bus/fslmc: not in enabled drivers build config 00:01:56.112 bus/ifpga: not in enabled drivers build config 00:01:56.112 bus/platform: not in enabled drivers build config 00:01:56.112 bus/uacce: not in enabled drivers build config 00:01:56.112 bus/vmbus: not in enabled drivers build config 00:01:56.112 common/cnxk: not in enabled drivers build config 00:01:56.112 common/mlx5: not in enabled drivers build config 00:01:56.112 common/nfp: not in 
enabled drivers build config 00:01:56.112 common/nitrox: not in enabled drivers build config 00:01:56.112 common/qat: not in enabled drivers build config 00:01:56.112 common/sfc_efx: not in enabled drivers build config 00:01:56.112 mempool/bucket: not in enabled drivers build config 00:01:56.112 mempool/cnxk: not in enabled drivers build config 00:01:56.112 mempool/dpaa: not in enabled drivers build config 00:01:56.112 mempool/dpaa2: not in enabled drivers build config 00:01:56.112 mempool/octeontx: not in enabled drivers build config 00:01:56.112 mempool/stack: not in enabled drivers build config 00:01:56.112 dma/cnxk: not in enabled drivers build config 00:01:56.112 dma/dpaa: not in enabled drivers build config 00:01:56.113 dma/dpaa2: not in enabled drivers build config 00:01:56.113 dma/hisilicon: not in enabled drivers build config 00:01:56.113 dma/idxd: not in enabled drivers build config 00:01:56.113 dma/ioat: not in enabled drivers build config 00:01:56.113 dma/skeleton: not in enabled drivers build config 00:01:56.113 net/af_packet: not in enabled drivers build config 00:01:56.113 net/af_xdp: not in enabled drivers build config 00:01:56.113 net/ark: not in enabled drivers build config 00:01:56.113 net/atlantic: not in enabled drivers build config 00:01:56.113 net/avp: not in enabled drivers build config 00:01:56.113 net/axgbe: not in enabled drivers build config 00:01:56.113 net/bnx2x: not in enabled drivers build config 00:01:56.113 net/bnxt: not in enabled drivers build config 00:01:56.113 net/bonding: not in enabled drivers build config 00:01:56.113 net/cnxk: not in enabled drivers build config 00:01:56.113 net/cpfl: not in enabled drivers build config 00:01:56.113 net/cxgbe: not in enabled drivers build config 00:01:56.113 net/dpaa: not in enabled drivers build config 00:01:56.113 net/dpaa2: not in enabled drivers build config 00:01:56.113 net/e1000: not in enabled drivers build config 00:01:56.113 net/ena: not in enabled drivers build config 00:01:56.113 net/enetc: not in enabled drivers build config 00:01:56.113 net/enetfec: not in enabled drivers build config 00:01:56.113 net/enic: not in enabled drivers build config 00:01:56.113 net/failsafe: not in enabled drivers build config 00:01:56.113 net/fm10k: not in enabled drivers build config 00:01:56.113 net/gve: not in enabled drivers build config 00:01:56.113 net/hinic: not in enabled drivers build config 00:01:56.113 net/hns3: not in enabled drivers build config 00:01:56.113 net/i40e: not in enabled drivers build config 00:01:56.113 net/iavf: not in enabled drivers build config 00:01:56.113 net/ice: not in enabled drivers build config 00:01:56.113 net/idpf: not in enabled drivers build config 00:01:56.113 net/igc: not in enabled drivers build config 00:01:56.113 net/ionic: not in enabled drivers build config 00:01:56.113 net/ipn3ke: not in enabled drivers build config 00:01:56.113 net/ixgbe: not in enabled drivers build config 00:01:56.113 net/mana: not in enabled drivers build config 00:01:56.113 net/memif: not in enabled drivers build config 00:01:56.113 net/mlx4: not in enabled drivers build config 00:01:56.113 net/mlx5: not in enabled drivers build config 00:01:56.113 net/mvneta: not in enabled drivers build config 00:01:56.113 net/mvpp2: not in enabled drivers build config 00:01:56.113 net/netvsc: not in enabled drivers build config 00:01:56.113 net/nfb: not in enabled drivers build config 00:01:56.113 net/nfp: not in enabled drivers build config 00:01:56.113 net/ngbe: not in enabled drivers build config 00:01:56.113 
net/null: not in enabled drivers build config 00:01:56.113 net/octeontx: not in enabled drivers build config 00:01:56.113 net/octeon_ep: not in enabled drivers build config 00:01:56.113 net/pcap: not in enabled drivers build config 00:01:56.113 net/pfe: not in enabled drivers build config 00:01:56.113 net/qede: not in enabled drivers build config 00:01:56.113 net/ring: not in enabled drivers build config 00:01:56.113 net/sfc: not in enabled drivers build config 00:01:56.113 net/softnic: not in enabled drivers build config 00:01:56.113 net/tap: not in enabled drivers build config 00:01:56.113 net/thunderx: not in enabled drivers build config 00:01:56.113 net/txgbe: not in enabled drivers build config 00:01:56.113 net/vdev_netvsc: not in enabled drivers build config 00:01:56.113 net/vhost: not in enabled drivers build config 00:01:56.113 net/virtio: not in enabled drivers build config 00:01:56.113 net/vmxnet3: not in enabled drivers build config 00:01:56.113 raw/*: missing internal dependency, "rawdev" 00:01:56.113 crypto/armv8: not in enabled drivers build config 00:01:56.113 crypto/bcmfs: not in enabled drivers build config 00:01:56.113 crypto/caam_jr: not in enabled drivers build config 00:01:56.113 crypto/ccp: not in enabled drivers build config 00:01:56.113 crypto/cnxk: not in enabled drivers build config 00:01:56.113 crypto/dpaa_sec: not in enabled drivers build config 00:01:56.113 crypto/dpaa2_sec: not in enabled drivers build config 00:01:56.113 crypto/ipsec_mb: not in enabled drivers build config 00:01:56.113 crypto/mlx5: not in enabled drivers build config 00:01:56.113 crypto/mvsam: not in enabled drivers build config 00:01:56.113 crypto/nitrox: not in enabled drivers build config 00:01:56.113 crypto/null: not in enabled drivers build config 00:01:56.113 crypto/octeontx: not in enabled drivers build config 00:01:56.113 crypto/openssl: not in enabled drivers build config 00:01:56.113 crypto/scheduler: not in enabled drivers build config 00:01:56.113 crypto/uadk: not in enabled drivers build config 00:01:56.113 crypto/virtio: not in enabled drivers build config 00:01:56.113 compress/isal: not in enabled drivers build config 00:01:56.113 compress/mlx5: not in enabled drivers build config 00:01:56.113 compress/nitrox: not in enabled drivers build config 00:01:56.113 compress/octeontx: not in enabled drivers build config 00:01:56.113 compress/zlib: not in enabled drivers build config 00:01:56.113 regex/*: missing internal dependency, "regexdev" 00:01:56.113 ml/*: missing internal dependency, "mldev" 00:01:56.113 vdpa/ifc: not in enabled drivers build config 00:01:56.113 vdpa/mlx5: not in enabled drivers build config 00:01:56.113 vdpa/nfp: not in enabled drivers build config 00:01:56.113 vdpa/sfc: not in enabled drivers build config 00:01:56.113 event/*: missing internal dependency, "eventdev" 00:01:56.113 baseband/*: missing internal dependency, "bbdev" 00:01:56.113 gpu/*: missing internal dependency, "gpudev" 00:01:56.113 00:01:56.113 00:01:56.113 Build targets in project: 85 00:01:56.113 00:01:56.113 DPDK 24.03.0 00:01:56.113 00:01:56.113 User defined options 00:01:56.113 buildtype : debug 00:01:56.113 default_library : shared 00:01:56.113 libdir : lib 00:01:56.113 prefix : /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:01:56.113 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror 00:01:56.113 c_link_args : 00:01:56.113 cpu_instruction_set: native 00:01:56.113 disable_apps : 
test-sad,test-acl,test-dma-perf,test-pipeline,test-compress-perf,test-fib,test-flow-perf,test-crypto-perf,test-bbdev,test-eventdev,pdump,test-mldev,test-cmdline,graph,test-security-perf,test-pmd,test,proc-info,test-regex,dumpcap,test-gpudev 00:01:56.113 disable_libs : port,sched,rib,node,ipsec,distributor,gro,eventdev,pdcp,acl,member,latencystats,efd,stack,regexdev,rawdev,bpf,metrics,gpudev,pipeline,pdump,table,fib,dispatcher,mldev,gso,cfgfile,bitratestats,ip_frag,graph,lpm,jobstats,argparse,pcapng,bbdev 00:01:56.113 enable_docs : false 00:01:56.113 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring 00:01:56.113 enable_kmods : false 00:01:56.113 max_lcores : 128 00:01:56.113 tests : false 00:01:56.113 00:01:56.113 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja 00:01:56.113 ninja: Entering directory `/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp' 00:01:56.113 [1/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:01:56.113 [2/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o 00:01:56.113 [3/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:01:56.113 [4/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o 00:01:56.113 [5/268] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o 00:01:56.113 [6/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o 00:01:56.113 [7/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:01:56.113 [8/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:01:56.113 [9/268] Linking static target lib/librte_kvargs.a 00:01:56.113 [10/268] Compiling C object lib/librte_log.a.p/log_log.c.o 00:01:56.113 [11/268] Compiling C object lib/librte_log.a.p/log_log_linux.c.o 00:01:56.113 [12/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:01:56.113 [13/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:01:56.113 [14/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:01:56.113 [15/268] Linking static target lib/librte_log.a 00:01:56.113 [16/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o 00:01:56.113 [17/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:01:56.113 [18/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:01:56.373 [19/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:01:56.373 [20/268] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:01:56.373 [21/268] Linking static target lib/librte_pci.a 00:01:56.373 [22/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:01:56.373 [23/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:01:56.373 [24/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:01:56.373 [25/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:01:56.373 [26/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:01:56.633 [27/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:01:56.633 [28/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o 00:01:56.634 [29/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:01:56.634 [30/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o 00:01:56.634 [31/268] Compiling C object 
lib/librte_eal.a.p/eal_common_eal_common_debug.c.o 00:01:56.634 [32/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:01:56.634 [33/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:01:56.634 [34/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:01:56.634 [35/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:01:56.634 [36/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:01:56.634 [37/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:01:56.634 [38/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:01:56.634 [39/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o 00:01:56.634 [40/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o 00:01:56.634 [41/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:01:56.634 [42/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:01:56.634 [43/268] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:01:56.634 [44/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:01:56.634 [45/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:01:56.634 [46/268] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:01:56.634 [47/268] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:01:56.634 [48/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:01:56.634 [49/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o 00:01:56.634 [50/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:01:56.634 [51/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:01:56.634 [52/268] Linking static target lib/librte_meter.a 00:01:56.634 [53/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:01:56.634 [54/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:01:56.634 [55/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o 00:01:56.634 [56/268] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:01:56.634 [57/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:01:56.634 [58/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:01:56.634 [59/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:01:56.634 [60/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o 00:01:56.634 [61/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:01:56.634 [62/268] Linking static target lib/librte_ring.a 00:01:56.634 [63/268] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:01:56.634 [64/268] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:01:56.634 [65/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o 00:01:56.634 [66/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:01:56.634 [67/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:01:56.634 [68/268] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:01:56.634 [69/268] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:01:56.634 [70/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:01:56.634 [71/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:01:56.634 
[72/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:01:56.634 [73/268] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:01:56.634 [74/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:01:56.634 [75/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:01:56.634 [76/268] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.634 [77/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o 00:01:56.634 [78/268] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:01:56.634 [79/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:01:56.634 [80/268] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:01:56.634 [81/268] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:01:56.634 [82/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o 00:01:56.634 [83/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:01:56.634 [84/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:01:56.634 [85/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o 00:01:56.634 [86/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:01:56.634 [87/268] Compiling C object lib/net/libnet_crc_avx512_lib.a.p/net_crc_avx512.c.o 00:01:56.634 [88/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:01:56.634 [89/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o 00:01:56.634 [90/268] Linking static target lib/net/libnet_crc_avx512_lib.a 00:01:56.634 [91/268] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:01:56.634 [92/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:01:56.634 [93/268] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o 00:01:56.634 [94/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:01:56.634 [95/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o 00:01:56.634 [96/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:01:56.634 [97/268] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:01:56.634 [98/268] Linking static target lib/librte_telemetry.a 00:01:56.634 [99/268] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:01:56.634 [100/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:01:56.634 [101/268] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.634 [102/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:01:56.634 [103/268] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:01:56.634 [104/268] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:01:56.634 [105/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:01:56.634 [106/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:01:56.634 [107/268] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:01:56.634 [108/268] Linking static target lib/librte_net.a 00:01:56.634 [109/268] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:01:56.893 [110/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:01:56.893 [111/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:01:56.893 [112/268] 
Linking static target lib/librte_mempool.a 00:01:56.893 [113/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:01:56.893 [114/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:01:56.893 [115/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:01:56.893 [116/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:01:56.893 [117/268] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:01:56.893 [118/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:01:56.893 [119/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:01:56.893 [120/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:01:56.893 [121/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:01:56.893 [122/268] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:01:56.893 [123/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:01:56.893 [124/268] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:01:56.893 [125/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:01:56.893 [126/268] Linking static target lib/librte_rcu.a 00:01:56.893 [127/268] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:01:56.893 [128/268] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:01:56.893 [129/268] Linking static target lib/librte_eal.a 00:01:56.893 [130/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:01:56.893 [131/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:01:56.893 [132/268] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.893 [133/268] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:01:56.893 [134/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:01:56.893 [135/268] Linking static target lib/librte_cmdline.a 00:01:56.893 [136/268] Generating lib/log.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.893 [137/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:01:56.893 [138/268] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:01:56.893 [139/268] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:01:56.893 [140/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:01:56.893 [141/268] Linking static target lib/librte_mbuf.a 00:01:56.893 [142/268] Linking target lib/librte_log.so.24.1 00:01:56.893 [143/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:01:56.893 [144/268] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:01:56.893 [145/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:01:56.893 [146/268] Compiling C object lib/librte_timer.a.p/timer_rte_timer.c.o 00:01:56.893 [147/268] Linking static target lib/librte_timer.a 00:01:57.152 [148/268] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:01:57.152 [149/268] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.152 [150/268] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:01:57.152 [151/268] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:01:57.152 [152/268] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:01:57.152 
[153/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:01:57.152 [154/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:01:57.152 [155/268] Linking static target lib/librte_dmadev.a 00:01:57.152 [156/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:01:57.152 [157/268] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:01:57.152 [158/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:01:57.152 [159/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:01:57.152 [160/268] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:01:57.152 [161/268] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.152 [162/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:01:57.152 [163/268] Linking target lib/librte_kvargs.so.24.1 00:01:57.152 [164/268] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.152 [165/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:01:57.152 [166/268] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:01:57.152 [167/268] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:01:57.152 [168/268] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:01:57.152 [169/268] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:01:57.152 [170/268] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:01:57.152 [171/268] Compiling C object lib/librte_vhost.a.p/vhost_socket.c.o 00:01:57.152 [172/268] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:01:57.152 [173/268] Linking static target drivers/libtmp_rte_mempool_ring.a 00:01:57.152 [174/268] Linking static target lib/librte_power.a 00:01:57.152 [175/268] Linking target lib/librte_telemetry.so.24.1 00:01:57.152 [176/268] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:01:57.152 [177/268] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:01:57.152 [178/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:01:57.152 [179/268] Linking static target lib/librte_security.a 00:01:57.152 [180/268] Linking static target lib/librte_compressdev.a 00:01:57.152 [181/268] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:01:57.152 [182/268] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:01:57.152 [183/268] Linking static target drivers/libtmp_rte_bus_vdev.a 00:01:57.152 [184/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:01:57.152 [185/268] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:01:57.152 [186/268] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:01:57.412 [187/268] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:01:57.412 [188/268] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:01:57.412 [189/268] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:01:57.412 [190/268] Linking static target lib/librte_hash.a 00:01:57.412 [191/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:01:57.412 [192/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:01:57.412 [193/268] Linking static 
target lib/librte_reorder.a 00:01:57.412 [194/268] Linking static target drivers/libtmp_rte_bus_pci.a 00:01:57.412 [195/268] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:01:57.412 [196/268] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:01:57.412 [197/268] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:01:57.412 [198/268] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:01:57.412 [199/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:01:57.412 [200/268] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.412 [201/268] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:01:57.412 [202/268] Linking static target drivers/librte_mempool_ring.a 00:01:57.412 [203/268] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:01:57.412 [204/268] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.412 [205/268] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:01:57.412 [206/268] Linking static target lib/librte_cryptodev.a 00:01:57.412 [207/268] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.412 [208/268] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:01:57.412 [209/268] Linking static target drivers/librte_bus_vdev.a 00:01:57.412 [210/268] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.412 [211/268] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:01:57.412 [212/268] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.412 [213/268] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:01:57.671 [214/268] Linking static target drivers/librte_bus_pci.a 00:01:57.671 [215/268] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.671 [216/268] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.671 [217/268] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.671 [218/268] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.931 [219/268] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.931 [220/268] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.931 [221/268] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:01:57.931 [222/268] Linking static target lib/librte_ethdev.a 00:01:57.931 [223/268] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:01:57.931 [224/268] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.190 [225/268] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:01:58.190 [226/268] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:01:58.190 [227/268] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by meson to capture output) 00:01:59.130 [228/268] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:01:59.130 [229/268] Linking static target lib/librte_vhost.a 
00:01:59.130 [230/268] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:01.040 [231/268] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.320 [232/268] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.320 [233/268] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:06.320 [234/268] Linking target lib/librte_eal.so.24.1 00:02:06.580 [235/268] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:06.580 [236/268] Linking target lib/librte_timer.so.24.1 00:02:06.580 [237/268] Linking target drivers/librte_bus_vdev.so.24.1 00:02:06.580 [238/268] Linking target lib/librte_ring.so.24.1 00:02:06.580 [239/268] Linking target lib/librte_meter.so.24.1 00:02:06.580 [240/268] Linking target lib/librte_pci.so.24.1 00:02:06.580 [241/268] Linking target lib/librte_dmadev.so.24.1 00:02:06.840 [242/268] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:06.841 [243/268] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:06.841 [244/268] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:06.841 [245/268] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:06.841 [246/268] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:06.841 [247/268] Linking target lib/librte_mempool.so.24.1 00:02:06.841 [248/268] Linking target lib/librte_rcu.so.24.1 00:02:06.841 [249/268] Linking target drivers/librte_bus_pci.so.24.1 00:02:06.841 [250/268] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:06.841 [251/268] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:07.101 [252/268] Linking target drivers/librte_mempool_ring.so.24.1 00:02:07.101 [253/268] Linking target lib/librte_mbuf.so.24.1 00:02:07.101 [254/268] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:07.101 [255/268] Linking target lib/librte_reorder.so.24.1 00:02:07.101 [256/268] Linking target lib/librte_compressdev.so.24.1 00:02:07.101 [257/268] Linking target lib/librte_net.so.24.1 00:02:07.101 [258/268] Linking target lib/librte_cryptodev.so.24.1 00:02:07.421 [259/268] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:07.421 [260/268] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:07.421 [261/268] Linking target lib/librte_cmdline.so.24.1 00:02:07.421 [262/268] Linking target lib/librte_security.so.24.1 00:02:07.421 [263/268] Linking target lib/librte_hash.so.24.1 00:02:07.421 [264/268] Linking target lib/librte_ethdev.so.24.1 00:02:07.421 [265/268] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:07.421 [266/268] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:07.421 [267/268] Linking target lib/librte_power.so.24.1 00:02:07.421 [268/268] Linking target lib/librte_vhost.so.24.1 00:02:07.421 INFO: autodetecting backend as ninja 00:02:07.421 INFO: calculating backend command to run: /usr/local/bin/ninja -C /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build-tmp -j 96 00:02:08.364 CC lib/ut_mock/mock.o 00:02:08.623 CC lib/log/log.o 00:02:08.624 CC lib/log/log_deprecated.o 00:02:08.624 CC lib/log/log_flags.o 00:02:08.624 CC 
lib/ut/ut.o 00:02:08.624 LIB libspdk_ut_mock.a 00:02:08.624 LIB libspdk_ut.a 00:02:08.624 LIB libspdk_log.a 00:02:08.624 SO libspdk_ut_mock.so.6.0 00:02:08.624 SO libspdk_ut.so.2.0 00:02:08.624 SO libspdk_log.so.7.0 00:02:08.624 SYMLINK libspdk_ut_mock.so 00:02:08.624 SYMLINK libspdk_ut.so 00:02:08.883 SYMLINK libspdk_log.so 00:02:09.144 CC lib/ioat/ioat.o 00:02:09.144 CXX lib/trace_parser/trace.o 00:02:09.144 CC lib/util/base64.o 00:02:09.144 CC lib/util/bit_array.o 00:02:09.144 CC lib/util/crc16.o 00:02:09.144 CC lib/util/cpuset.o 00:02:09.144 CC lib/util/crc32.o 00:02:09.144 CC lib/util/crc32c.o 00:02:09.144 CC lib/util/crc32_ieee.o 00:02:09.144 CC lib/util/crc64.o 00:02:09.144 CC lib/dma/dma.o 00:02:09.144 CC lib/util/dif.o 00:02:09.144 CC lib/util/fd.o 00:02:09.144 CC lib/util/fd_group.o 00:02:09.144 CC lib/util/file.o 00:02:09.144 CC lib/util/hexlify.o 00:02:09.144 CC lib/util/iov.o 00:02:09.144 CC lib/util/math.o 00:02:09.144 CC lib/util/pipe.o 00:02:09.144 CC lib/util/net.o 00:02:09.144 CC lib/util/strerror_tls.o 00:02:09.144 CC lib/util/string.o 00:02:09.144 CC lib/util/uuid.o 00:02:09.144 CC lib/util/xor.o 00:02:09.144 CC lib/util/zipf.o 00:02:09.144 CC lib/vfio_user/host/vfio_user.o 00:02:09.144 CC lib/vfio_user/host/vfio_user_pci.o 00:02:09.144 LIB libspdk_ioat.a 00:02:09.144 LIB libspdk_dma.a 00:02:09.403 SO libspdk_ioat.so.7.0 00:02:09.403 SO libspdk_dma.so.4.0 00:02:09.403 SYMLINK libspdk_ioat.so 00:02:09.403 SYMLINK libspdk_dma.so 00:02:09.403 LIB libspdk_vfio_user.a 00:02:09.403 SO libspdk_vfio_user.so.5.0 00:02:09.403 LIB libspdk_util.a 00:02:09.663 SYMLINK libspdk_vfio_user.so 00:02:09.663 SO libspdk_util.so.10.0 00:02:09.663 SYMLINK libspdk_util.so 00:02:09.663 LIB libspdk_trace_parser.a 00:02:09.663 SO libspdk_trace_parser.so.5.0 00:02:09.921 SYMLINK libspdk_trace_parser.so 00:02:09.921 CC lib/vmd/vmd.o 00:02:09.921 CC lib/vmd/led.o 00:02:09.921 CC lib/rdma_utils/rdma_utils.o 00:02:09.921 CC lib/idxd/idxd_user.o 00:02:09.921 CC lib/idxd/idxd.o 00:02:09.921 CC lib/conf/conf.o 00:02:09.921 CC lib/rdma_provider/common.o 00:02:09.921 CC lib/idxd/idxd_kernel.o 00:02:09.921 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:09.921 CC lib/json/json_parse.o 00:02:09.921 CC lib/json/json_util.o 00:02:09.921 CC lib/json/json_write.o 00:02:09.921 CC lib/env_dpdk/env.o 00:02:09.921 CC lib/env_dpdk/pci.o 00:02:09.921 CC lib/env_dpdk/memory.o 00:02:09.921 CC lib/env_dpdk/init.o 00:02:09.921 CC lib/env_dpdk/threads.o 00:02:09.921 CC lib/env_dpdk/pci_ioat.o 00:02:09.921 CC lib/env_dpdk/pci_virtio.o 00:02:09.921 CC lib/env_dpdk/pci_vmd.o 00:02:09.921 CC lib/env_dpdk/pci_idxd.o 00:02:09.921 CC lib/env_dpdk/pci_event.o 00:02:09.921 CC lib/env_dpdk/sigbus_handler.o 00:02:09.921 CC lib/env_dpdk/pci_dpdk.o 00:02:09.921 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:09.921 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:10.179 LIB libspdk_rdma_provider.a 00:02:10.179 SO libspdk_rdma_provider.so.6.0 00:02:10.179 LIB libspdk_conf.a 00:02:10.179 SO libspdk_conf.so.6.0 00:02:10.179 LIB libspdk_rdma_utils.a 00:02:10.179 SO libspdk_rdma_utils.so.1.0 00:02:10.179 SYMLINK libspdk_rdma_provider.so 00:02:10.179 LIB libspdk_json.a 00:02:10.179 SYMLINK libspdk_conf.so 00:02:10.179 SO libspdk_json.so.6.0 00:02:10.179 SYMLINK libspdk_rdma_utils.so 00:02:10.438 SYMLINK libspdk_json.so 00:02:10.438 LIB libspdk_idxd.a 00:02:10.438 LIB libspdk_vmd.a 00:02:10.438 SO libspdk_idxd.so.12.0 00:02:10.438 SO libspdk_vmd.so.6.0 00:02:10.438 SYMLINK libspdk_idxd.so 00:02:10.438 SYMLINK libspdk_vmd.so 00:02:10.698 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:10.698 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:10.698 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:10.698 CC lib/jsonrpc/jsonrpc_client.o 00:02:10.958 LIB libspdk_jsonrpc.a 00:02:10.958 SO libspdk_jsonrpc.so.6.0 00:02:10.958 SYMLINK libspdk_jsonrpc.so 00:02:10.958 LIB libspdk_env_dpdk.a 00:02:10.958 SO libspdk_env_dpdk.so.15.0 00:02:11.216 SYMLINK libspdk_env_dpdk.so 00:02:11.216 CC lib/rpc/rpc.o 00:02:11.476 LIB libspdk_rpc.a 00:02:11.476 SO libspdk_rpc.so.6.0 00:02:11.476 SYMLINK libspdk_rpc.so 00:02:11.735 CC lib/trace/trace.o 00:02:11.735 CC lib/trace/trace_flags.o 00:02:11.735 CC lib/trace/trace_rpc.o 00:02:11.735 CC lib/keyring/keyring.o 00:02:11.735 CC lib/keyring/keyring_rpc.o 00:02:11.735 CC lib/notify/notify.o 00:02:11.735 CC lib/notify/notify_rpc.o 00:02:11.997 LIB libspdk_notify.a 00:02:11.997 LIB libspdk_trace.a 00:02:11.997 SO libspdk_notify.so.6.0 00:02:11.997 LIB libspdk_keyring.a 00:02:11.997 SO libspdk_trace.so.10.0 00:02:11.997 SO libspdk_keyring.so.1.0 00:02:11.997 SYMLINK libspdk_notify.so 00:02:11.997 SYMLINK libspdk_trace.so 00:02:11.997 SYMLINK libspdk_keyring.so 00:02:12.296 CC lib/sock/sock.o 00:02:12.296 CC lib/sock/sock_rpc.o 00:02:12.296 CC lib/thread/thread.o 00:02:12.296 CC lib/thread/iobuf.o 00:02:12.864 LIB libspdk_sock.a 00:02:12.864 SO libspdk_sock.so.10.0 00:02:12.864 SYMLINK libspdk_sock.so 00:02:13.125 CC lib/nvme/nvme_ctrlr_cmd.o 00:02:13.125 CC lib/nvme/nvme_fabric.o 00:02:13.125 CC lib/nvme/nvme_ctrlr.o 00:02:13.125 CC lib/nvme/nvme_ns.o 00:02:13.125 CC lib/nvme/nvme_ns_cmd.o 00:02:13.125 CC lib/nvme/nvme_pcie_common.o 00:02:13.125 CC lib/nvme/nvme.o 00:02:13.125 CC lib/nvme/nvme_pcie.o 00:02:13.125 CC lib/nvme/nvme_qpair.o 00:02:13.125 CC lib/nvme/nvme_transport.o 00:02:13.125 CC lib/nvme/nvme_quirks.o 00:02:13.125 CC lib/nvme/nvme_discovery.o 00:02:13.125 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:02:13.125 CC lib/nvme/nvme_tcp.o 00:02:13.125 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:02:13.125 CC lib/nvme/nvme_opal.o 00:02:13.125 CC lib/nvme/nvme_io_msg.o 00:02:13.125 CC lib/nvme/nvme_poll_group.o 00:02:13.125 CC lib/nvme/nvme_zns.o 00:02:13.125 CC lib/nvme/nvme_stubs.o 00:02:13.125 CC lib/nvme/nvme_auth.o 00:02:13.125 CC lib/nvme/nvme_cuse.o 00:02:13.125 CC lib/nvme/nvme_vfio_user.o 00:02:13.125 CC lib/nvme/nvme_rdma.o 00:02:13.383 LIB libspdk_thread.a 00:02:13.642 SO libspdk_thread.so.10.1 00:02:13.642 SYMLINK libspdk_thread.so 00:02:13.900 CC lib/init/json_config.o 00:02:13.900 CC lib/init/subsystem_rpc.o 00:02:13.900 CC lib/init/subsystem.o 00:02:13.900 CC lib/accel/accel_rpc.o 00:02:13.900 CC lib/init/rpc.o 00:02:13.900 CC lib/accel/accel.o 00:02:13.900 CC lib/accel/accel_sw.o 00:02:13.900 CC lib/virtio/virtio_vfio_user.o 00:02:13.900 CC lib/virtio/virtio.o 00:02:13.900 CC lib/virtio/virtio_vhost_user.o 00:02:13.900 CC lib/virtio/virtio_pci.o 00:02:13.900 CC lib/blob/blobstore.o 00:02:13.900 CC lib/blob/request.o 00:02:13.900 CC lib/blob/zeroes.o 00:02:13.900 CC lib/vfu_tgt/tgt_rpc.o 00:02:13.900 CC lib/blob/blob_bs_dev.o 00:02:13.900 CC lib/vfu_tgt/tgt_endpoint.o 00:02:14.159 LIB libspdk_init.a 00:02:14.159 SO libspdk_init.so.5.0 00:02:14.159 LIB libspdk_virtio.a 00:02:14.159 LIB libspdk_vfu_tgt.a 00:02:14.159 SYMLINK libspdk_init.so 00:02:14.159 SO libspdk_virtio.so.7.0 00:02:14.159 SO libspdk_vfu_tgt.so.3.0 00:02:14.159 SYMLINK libspdk_virtio.so 00:02:14.159 SYMLINK libspdk_vfu_tgt.so 00:02:14.419 CC lib/event/app.o 00:02:14.419 CC lib/event/reactor.o 00:02:14.419 CC lib/event/log_rpc.o 00:02:14.419 CC 
lib/event/app_rpc.o 00:02:14.419 CC lib/event/scheduler_static.o 00:02:14.678 LIB libspdk_accel.a 00:02:14.679 SO libspdk_accel.so.16.0 00:02:14.679 SYMLINK libspdk_accel.so 00:02:14.679 LIB libspdk_nvme.a 00:02:14.679 LIB libspdk_event.a 00:02:14.679 SO libspdk_event.so.14.0 00:02:14.679 SO libspdk_nvme.so.13.1 00:02:14.938 SYMLINK libspdk_event.so 00:02:14.938 CC lib/bdev/bdev.o 00:02:14.938 CC lib/bdev/bdev_rpc.o 00:02:14.938 CC lib/bdev/bdev_zone.o 00:02:14.938 CC lib/bdev/part.o 00:02:14.938 CC lib/bdev/scsi_nvme.o 00:02:14.938 SYMLINK libspdk_nvme.so 00:02:15.878 LIB libspdk_blob.a 00:02:15.878 SO libspdk_blob.so.11.0 00:02:16.137 SYMLINK libspdk_blob.so 00:02:16.397 CC lib/lvol/lvol.o 00:02:16.397 CC lib/blobfs/blobfs.o 00:02:16.397 CC lib/blobfs/tree.o 00:02:16.657 LIB libspdk_bdev.a 00:02:16.657 SO libspdk_bdev.so.16.0 00:02:16.917 SYMLINK libspdk_bdev.so 00:02:16.917 LIB libspdk_blobfs.a 00:02:16.917 SO libspdk_blobfs.so.10.0 00:02:16.917 LIB libspdk_lvol.a 00:02:16.917 SO libspdk_lvol.so.10.0 00:02:16.917 SYMLINK libspdk_blobfs.so 00:02:17.177 SYMLINK libspdk_lvol.so 00:02:17.177 CC lib/ftl/ftl_core.o 00:02:17.177 CC lib/ftl/ftl_init.o 00:02:17.177 CC lib/nvmf/ctrlr.o 00:02:17.177 CC lib/ftl/ftl_layout.o 00:02:17.177 CC lib/ftl/ftl_debug.o 00:02:17.177 CC lib/nvmf/ctrlr_discovery.o 00:02:17.177 CC lib/ftl/ftl_io.o 00:02:17.177 CC lib/nvmf/ctrlr_bdev.o 00:02:17.177 CC lib/ftl/ftl_sb.o 00:02:17.177 CC lib/nvmf/subsystem.o 00:02:17.177 CC lib/nvmf/nvmf_rpc.o 00:02:17.178 CC lib/ftl/ftl_l2p.o 00:02:17.178 CC lib/nvmf/nvmf.o 00:02:17.178 CC lib/ftl/ftl_l2p_flat.o 00:02:17.178 CC lib/nvmf/tcp.o 00:02:17.178 CC lib/nvmf/stubs.o 00:02:17.178 CC lib/ftl/ftl_nv_cache.o 00:02:17.178 CC lib/nvmf/transport.o 00:02:17.178 CC lib/ftl/ftl_band.o 00:02:17.178 CC lib/nbd/nbd_rpc.o 00:02:17.178 CC lib/ftl/ftl_band_ops.o 00:02:17.178 CC lib/nbd/nbd.o 00:02:17.178 CC lib/nvmf/mdns_server.o 00:02:17.178 CC lib/ftl/ftl_writer.o 00:02:17.178 CC lib/nvmf/vfio_user.o 00:02:17.178 CC lib/nvmf/rdma.o 00:02:17.178 CC lib/ftl/ftl_rq.o 00:02:17.178 CC lib/ublk/ublk.o 00:02:17.178 CC lib/ftl/ftl_l2p_cache.o 00:02:17.178 CC lib/nvmf/auth.o 00:02:17.178 CC lib/ftl/ftl_reloc.o 00:02:17.178 CC lib/ftl/ftl_p2l.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_startup.o 00:02:17.178 CC lib/ublk/ublk_rpc.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_misc.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_md.o 00:02:17.178 CC lib/scsi/dev.o 00:02:17.178 CC lib/scsi/lun.o 00:02:17.178 CC lib/scsi/port.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:02:17.178 CC lib/scsi/scsi.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_band.o 00:02:17.178 CC lib/scsi/scsi_pr.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:02:17.178 CC lib/scsi/scsi_bdev.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:02:17.178 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:02:17.178 CC lib/scsi/scsi_rpc.o 00:02:17.178 CC lib/ftl/utils/ftl_md.o 00:02:17.178 CC lib/ftl/utils/ftl_conf.o 00:02:17.178 CC lib/scsi/task.o 00:02:17.178 CC lib/ftl/utils/ftl_mempool.o 00:02:17.178 CC lib/ftl/utils/ftl_property.o 00:02:17.178 CC lib/ftl/utils/ftl_bitmap.o 00:02:17.178 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:02:17.178 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:02:17.178 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:02:17.178 CC lib/ftl/upgrade/ftl_band_upgrade.o 
00:02:17.178 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:02:17.178 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:02:17.178 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:02:17.178 CC lib/ftl/upgrade/ftl_sb_v3.o 00:02:17.178 CC lib/ftl/upgrade/ftl_sb_v5.o 00:02:17.178 CC lib/ftl/nvc/ftl_nvc_dev.o 00:02:17.178 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:02:17.178 CC lib/ftl/base/ftl_base_dev.o 00:02:17.178 CC lib/ftl/base/ftl_base_bdev.o 00:02:17.178 CC lib/ftl/ftl_trace.o 00:02:17.747 LIB libspdk_scsi.a 00:02:17.747 LIB libspdk_nbd.a 00:02:17.747 SO libspdk_nbd.so.7.0 00:02:17.747 SO libspdk_scsi.so.9.0 00:02:17.747 SYMLINK libspdk_nbd.so 00:02:17.747 LIB libspdk_ublk.a 00:02:18.007 SYMLINK libspdk_scsi.so 00:02:18.007 SO libspdk_ublk.so.3.0 00:02:18.007 SYMLINK libspdk_ublk.so 00:02:18.007 LIB libspdk_ftl.a 00:02:18.267 CC lib/vhost/vhost.o 00:02:18.267 CC lib/vhost/vhost_scsi.o 00:02:18.267 CC lib/vhost/vhost_rpc.o 00:02:18.267 CC lib/vhost/vhost_blk.o 00:02:18.267 CC lib/vhost/rte_vhost_user.o 00:02:18.267 CC lib/iscsi/conn.o 00:02:18.267 CC lib/iscsi/init_grp.o 00:02:18.267 CC lib/iscsi/iscsi.o 00:02:18.267 CC lib/iscsi/md5.o 00:02:18.267 CC lib/iscsi/param.o 00:02:18.267 CC lib/iscsi/portal_grp.o 00:02:18.267 CC lib/iscsi/tgt_node.o 00:02:18.267 CC lib/iscsi/iscsi_subsystem.o 00:02:18.267 CC lib/iscsi/iscsi_rpc.o 00:02:18.267 CC lib/iscsi/task.o 00:02:18.267 SO libspdk_ftl.so.9.0 00:02:18.526 SYMLINK libspdk_ftl.so 00:02:18.786 LIB libspdk_nvmf.a 00:02:18.786 SO libspdk_nvmf.so.19.0 00:02:19.047 LIB libspdk_vhost.a 00:02:19.047 SO libspdk_vhost.so.8.0 00:02:19.047 SYMLINK libspdk_nvmf.so 00:02:19.047 SYMLINK libspdk_vhost.so 00:02:19.047 LIB libspdk_iscsi.a 00:02:19.307 SO libspdk_iscsi.so.8.0 00:02:19.307 SYMLINK libspdk_iscsi.so 00:02:19.877 CC module/vfu_device/vfu_virtio.o 00:02:19.877 CC module/vfu_device/vfu_virtio_scsi.o 00:02:19.877 CC module/vfu_device/vfu_virtio_blk.o 00:02:19.877 CC module/vfu_device/vfu_virtio_rpc.o 00:02:19.877 CC module/env_dpdk/env_dpdk_rpc.o 00:02:19.877 CC module/sock/posix/posix.o 00:02:19.877 CC module/accel/dsa/accel_dsa.o 00:02:19.877 CC module/accel/ioat/accel_ioat.o 00:02:19.877 CC module/accel/dsa/accel_dsa_rpc.o 00:02:19.877 CC module/accel/ioat/accel_ioat_rpc.o 00:02:19.877 LIB libspdk_env_dpdk_rpc.a 00:02:19.877 CC module/scheduler/dynamic/scheduler_dynamic.o 00:02:19.877 CC module/accel/iaa/accel_iaa.o 00:02:19.877 CC module/accel/error/accel_error.o 00:02:19.877 CC module/accel/iaa/accel_iaa_rpc.o 00:02:19.877 CC module/accel/error/accel_error_rpc.o 00:02:19.877 CC module/scheduler/gscheduler/gscheduler.o 00:02:19.877 CC module/blob/bdev/blob_bdev.o 00:02:19.877 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:02:20.137 CC module/keyring/linux/keyring.o 00:02:20.137 SO libspdk_env_dpdk_rpc.so.6.0 00:02:20.137 CC module/keyring/linux/keyring_rpc.o 00:02:20.137 CC module/keyring/file/keyring.o 00:02:20.137 CC module/keyring/file/keyring_rpc.o 00:02:20.137 SYMLINK libspdk_env_dpdk_rpc.so 00:02:20.137 LIB libspdk_scheduler_gscheduler.a 00:02:20.137 LIB libspdk_scheduler_dpdk_governor.a 00:02:20.137 SO libspdk_scheduler_gscheduler.so.4.0 00:02:20.137 LIB libspdk_accel_error.a 00:02:20.137 LIB libspdk_keyring_linux.a 00:02:20.137 LIB libspdk_keyring_file.a 00:02:20.137 LIB libspdk_scheduler_dynamic.a 00:02:20.137 LIB libspdk_accel_ioat.a 00:02:20.137 LIB libspdk_accel_iaa.a 00:02:20.137 SO libspdk_scheduler_dpdk_governor.so.4.0 00:02:20.137 SO libspdk_keyring_linux.so.1.0 00:02:20.137 SO libspdk_keyring_file.so.1.0 00:02:20.137 SO libspdk_accel_ioat.so.6.0 
00:02:20.137 SO libspdk_accel_error.so.2.0 00:02:20.137 SO libspdk_accel_iaa.so.3.0 00:02:20.137 SO libspdk_scheduler_dynamic.so.4.0 00:02:20.137 SYMLINK libspdk_scheduler_gscheduler.so 00:02:20.137 LIB libspdk_accel_dsa.a 00:02:20.137 LIB libspdk_blob_bdev.a 00:02:20.137 SO libspdk_accel_dsa.so.5.0 00:02:20.137 SYMLINK libspdk_scheduler_dpdk_governor.so 00:02:20.137 SO libspdk_blob_bdev.so.11.0 00:02:20.137 SYMLINK libspdk_accel_iaa.so 00:02:20.137 SYMLINK libspdk_keyring_linux.so 00:02:20.137 SYMLINK libspdk_accel_error.so 00:02:20.137 SYMLINK libspdk_scheduler_dynamic.so 00:02:20.398 SYMLINK libspdk_keyring_file.so 00:02:20.398 SYMLINK libspdk_accel_ioat.so 00:02:20.398 SYMLINK libspdk_accel_dsa.so 00:02:20.398 SYMLINK libspdk_blob_bdev.so 00:02:20.398 LIB libspdk_vfu_device.a 00:02:20.398 SO libspdk_vfu_device.so.3.0 00:02:20.398 SYMLINK libspdk_vfu_device.so 00:02:20.658 LIB libspdk_sock_posix.a 00:02:20.658 SO libspdk_sock_posix.so.6.0 00:02:20.658 SYMLINK libspdk_sock_posix.so 00:02:20.658 CC module/bdev/null/bdev_null.o 00:02:20.658 CC module/bdev/null/bdev_null_rpc.o 00:02:20.658 CC module/bdev/delay/vbdev_delay.o 00:02:20.658 CC module/bdev/delay/vbdev_delay_rpc.o 00:02:20.658 CC module/bdev/malloc/bdev_malloc_rpc.o 00:02:20.658 CC module/bdev/malloc/bdev_malloc.o 00:02:20.658 CC module/bdev/iscsi/bdev_iscsi.o 00:02:20.658 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:02:20.658 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:02:20.658 CC module/bdev/passthru/vbdev_passthru.o 00:02:20.658 CC module/bdev/aio/bdev_aio_rpc.o 00:02:20.658 CC module/bdev/aio/bdev_aio.o 00:02:20.658 CC module/bdev/error/vbdev_error.o 00:02:20.658 CC module/bdev/zone_block/vbdev_zone_block.o 00:02:20.658 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:02:20.658 CC module/bdev/error/vbdev_error_rpc.o 00:02:20.658 CC module/bdev/lvol/vbdev_lvol.o 00:02:20.658 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:02:20.658 CC module/bdev/split/vbdev_split.o 00:02:20.658 CC module/bdev/nvme/bdev_nvme.o 00:02:20.658 CC module/bdev/split/vbdev_split_rpc.o 00:02:20.658 CC module/bdev/nvme/bdev_nvme_rpc.o 00:02:20.658 CC module/bdev/nvme/nvme_rpc.o 00:02:20.658 CC module/bdev/nvme/bdev_mdns_client.o 00:02:20.658 CC module/bdev/gpt/gpt.o 00:02:20.658 CC module/bdev/nvme/vbdev_opal.o 00:02:20.658 CC module/bdev/gpt/vbdev_gpt.o 00:02:20.658 CC module/bdev/nvme/vbdev_opal_rpc.o 00:02:20.658 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:02:20.658 CC module/blobfs/bdev/blobfs_bdev.o 00:02:20.658 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:02:20.658 CC module/bdev/virtio/bdev_virtio_scsi.o 00:02:20.658 CC module/bdev/raid/bdev_raid.o 00:02:20.658 CC module/bdev/virtio/bdev_virtio_blk.o 00:02:20.658 CC module/bdev/raid/bdev_raid_rpc.o 00:02:20.658 CC module/bdev/virtio/bdev_virtio_rpc.o 00:02:20.658 CC module/bdev/raid/bdev_raid_sb.o 00:02:20.658 CC module/bdev/ftl/bdev_ftl.o 00:02:20.658 CC module/bdev/raid/raid0.o 00:02:20.658 CC module/bdev/ftl/bdev_ftl_rpc.o 00:02:20.658 CC module/bdev/raid/raid1.o 00:02:20.658 CC module/bdev/raid/concat.o 00:02:20.918 LIB libspdk_blobfs_bdev.a 00:02:20.918 LIB libspdk_bdev_null.a 00:02:20.918 SO libspdk_blobfs_bdev.so.6.0 00:02:20.918 LIB libspdk_bdev_split.a 00:02:20.918 SO libspdk_bdev_null.so.6.0 00:02:20.918 SO libspdk_bdev_split.so.6.0 00:02:20.918 SYMLINK libspdk_blobfs_bdev.so 00:02:21.179 SYMLINK libspdk_bdev_null.so 00:02:21.179 LIB libspdk_bdev_error.a 00:02:21.179 LIB libspdk_bdev_gpt.a 00:02:21.179 LIB libspdk_bdev_aio.a 00:02:21.179 LIB libspdk_bdev_zone_block.a 00:02:21.179 LIB 
libspdk_bdev_passthru.a 00:02:21.179 SYMLINK libspdk_bdev_split.so 00:02:21.179 SO libspdk_bdev_error.so.6.0 00:02:21.179 LIB libspdk_bdev_ftl.a 00:02:21.179 LIB libspdk_bdev_malloc.a 00:02:21.179 LIB libspdk_bdev_iscsi.a 00:02:21.179 LIB libspdk_bdev_delay.a 00:02:21.179 SO libspdk_bdev_aio.so.6.0 00:02:21.179 SO libspdk_bdev_gpt.so.6.0 00:02:21.179 SO libspdk_bdev_zone_block.so.6.0 00:02:21.179 SO libspdk_bdev_malloc.so.6.0 00:02:21.179 SO libspdk_bdev_passthru.so.6.0 00:02:21.179 SO libspdk_bdev_ftl.so.6.0 00:02:21.179 SO libspdk_bdev_iscsi.so.6.0 00:02:21.179 SO libspdk_bdev_delay.so.6.0 00:02:21.179 SYMLINK libspdk_bdev_error.so 00:02:21.179 SYMLINK libspdk_bdev_aio.so 00:02:21.179 SYMLINK libspdk_bdev_zone_block.so 00:02:21.179 SYMLINK libspdk_bdev_gpt.so 00:02:21.179 SYMLINK libspdk_bdev_passthru.so 00:02:21.179 SYMLINK libspdk_bdev_malloc.so 00:02:21.179 LIB libspdk_bdev_lvol.a 00:02:21.179 SYMLINK libspdk_bdev_iscsi.so 00:02:21.179 SYMLINK libspdk_bdev_ftl.so 00:02:21.179 SYMLINK libspdk_bdev_delay.so 00:02:21.179 SO libspdk_bdev_lvol.so.6.0 00:02:21.179 LIB libspdk_bdev_virtio.a 00:02:21.179 SO libspdk_bdev_virtio.so.6.0 00:02:21.439 SYMLINK libspdk_bdev_lvol.so 00:02:21.439 SYMLINK libspdk_bdev_virtio.so 00:02:21.439 LIB libspdk_bdev_raid.a 00:02:21.699 SO libspdk_bdev_raid.so.6.0 00:02:21.699 SYMLINK libspdk_bdev_raid.so 00:02:22.639 LIB libspdk_bdev_nvme.a 00:02:22.639 SO libspdk_bdev_nvme.so.7.0 00:02:22.639 SYMLINK libspdk_bdev_nvme.so 00:02:23.209 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:02:23.209 CC module/event/subsystems/iobuf/iobuf.o 00:02:23.209 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:02:23.209 CC module/event/subsystems/keyring/keyring.o 00:02:23.209 CC module/event/subsystems/sock/sock.o 00:02:23.209 CC module/event/subsystems/scheduler/scheduler.o 00:02:23.209 CC module/event/subsystems/vmd/vmd.o 00:02:23.209 CC module/event/subsystems/vmd/vmd_rpc.o 00:02:23.209 CC module/event/subsystems/vfu_tgt/vfu_tgt.o 00:02:23.209 LIB libspdk_event_vhost_blk.a 00:02:23.209 LIB libspdk_event_keyring.a 00:02:23.209 LIB libspdk_event_iobuf.a 00:02:23.209 LIB libspdk_event_vfu_tgt.a 00:02:23.209 LIB libspdk_event_vmd.a 00:02:23.209 LIB libspdk_event_sock.a 00:02:23.209 LIB libspdk_event_scheduler.a 00:02:23.209 SO libspdk_event_keyring.so.1.0 00:02:23.209 SO libspdk_event_vhost_blk.so.3.0 00:02:23.209 SO libspdk_event_vmd.so.6.0 00:02:23.209 SO libspdk_event_iobuf.so.3.0 00:02:23.209 SO libspdk_event_vfu_tgt.so.3.0 00:02:23.209 SO libspdk_event_sock.so.5.0 00:02:23.209 SO libspdk_event_scheduler.so.4.0 00:02:23.209 SYMLINK libspdk_event_vhost_blk.so 00:02:23.470 SYMLINK libspdk_event_keyring.so 00:02:23.470 SYMLINK libspdk_event_vfu_tgt.so 00:02:23.470 SYMLINK libspdk_event_vmd.so 00:02:23.470 SYMLINK libspdk_event_iobuf.so 00:02:23.470 SYMLINK libspdk_event_sock.so 00:02:23.470 SYMLINK libspdk_event_scheduler.so 00:02:23.730 CC module/event/subsystems/accel/accel.o 00:02:23.730 LIB libspdk_event_accel.a 00:02:23.730 SO libspdk_event_accel.so.6.0 00:02:23.730 SYMLINK libspdk_event_accel.so 00:02:23.991 CC module/event/subsystems/bdev/bdev.o 00:02:24.251 LIB libspdk_event_bdev.a 00:02:24.251 SO libspdk_event_bdev.so.6.0 00:02:24.251 SYMLINK libspdk_event_bdev.so 00:02:24.511 CC module/event/subsystems/scsi/scsi.o 00:02:24.772 CC module/event/subsystems/nbd/nbd.o 00:02:24.772 CC module/event/subsystems/ublk/ublk.o 00:02:24.772 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:02:24.772 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:02:24.772 LIB libspdk_event_scsi.a 
00:02:24.772 LIB libspdk_event_nbd.a 00:02:24.772 LIB libspdk_event_ublk.a 00:02:24.772 SO libspdk_event_scsi.so.6.0 00:02:24.772 SO libspdk_event_nbd.so.6.0 00:02:24.772 SO libspdk_event_ublk.so.3.0 00:02:24.772 LIB libspdk_event_nvmf.a 00:02:24.772 SYMLINK libspdk_event_scsi.so 00:02:24.772 SYMLINK libspdk_event_nbd.so 00:02:24.772 SO libspdk_event_nvmf.so.6.0 00:02:24.772 SYMLINK libspdk_event_ublk.so 00:02:25.032 SYMLINK libspdk_event_nvmf.so 00:02:25.032 CC module/event/subsystems/iscsi/iscsi.o 00:02:25.032 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:02:25.292 LIB libspdk_event_iscsi.a 00:02:25.292 SO libspdk_event_iscsi.so.6.0 00:02:25.292 LIB libspdk_event_vhost_scsi.a 00:02:25.292 SO libspdk_event_vhost_scsi.so.3.0 00:02:25.292 SYMLINK libspdk_event_iscsi.so 00:02:25.292 SYMLINK libspdk_event_vhost_scsi.so 00:02:25.552 SO libspdk.so.6.0 00:02:25.552 SYMLINK libspdk.so 00:02:25.811 CC test/rpc_client/rpc_client_test.o 00:02:25.811 TEST_HEADER include/spdk/accel.h 00:02:25.811 TEST_HEADER include/spdk/barrier.h 00:02:25.811 TEST_HEADER include/spdk/accel_module.h 00:02:25.811 TEST_HEADER include/spdk/assert.h 00:02:25.811 TEST_HEADER include/spdk/bdev.h 00:02:25.811 TEST_HEADER include/spdk/base64.h 00:02:25.811 TEST_HEADER include/spdk/bdev_zone.h 00:02:25.811 TEST_HEADER include/spdk/bdev_module.h 00:02:25.811 TEST_HEADER include/spdk/bit_array.h 00:02:25.811 TEST_HEADER include/spdk/blobfs_bdev.h 00:02:25.811 CC app/trace_record/trace_record.o 00:02:25.811 TEST_HEADER include/spdk/bit_pool.h 00:02:25.811 TEST_HEADER include/spdk/blob_bdev.h 00:02:25.811 CXX app/trace/trace.o 00:02:25.811 TEST_HEADER include/spdk/blob.h 00:02:25.811 TEST_HEADER include/spdk/blobfs.h 00:02:25.811 TEST_HEADER include/spdk/conf.h 00:02:25.811 TEST_HEADER include/spdk/config.h 00:02:25.811 TEST_HEADER include/spdk/cpuset.h 00:02:25.811 TEST_HEADER include/spdk/crc32.h 00:02:25.811 TEST_HEADER include/spdk/crc16.h 00:02:25.811 TEST_HEADER include/spdk/crc64.h 00:02:25.811 TEST_HEADER include/spdk/dma.h 00:02:25.811 TEST_HEADER include/spdk/endian.h 00:02:25.811 TEST_HEADER include/spdk/dif.h 00:02:25.811 TEST_HEADER include/spdk/env.h 00:02:25.811 TEST_HEADER include/spdk/env_dpdk.h 00:02:25.811 TEST_HEADER include/spdk/event.h 00:02:25.811 TEST_HEADER include/spdk/fd_group.h 00:02:25.811 CC app/spdk_nvme_identify/identify.o 00:02:25.811 TEST_HEADER include/spdk/ftl.h 00:02:25.811 TEST_HEADER include/spdk/fd.h 00:02:25.811 TEST_HEADER include/spdk/file.h 00:02:25.811 CC app/spdk_nvme_discover/discovery_aer.o 00:02:25.811 TEST_HEADER include/spdk/hexlify.h 00:02:25.811 TEST_HEADER include/spdk/histogram_data.h 00:02:25.811 TEST_HEADER include/spdk/gpt_spec.h 00:02:25.811 CC app/spdk_lspci/spdk_lspci.o 00:02:25.811 TEST_HEADER include/spdk/idxd.h 00:02:25.811 CC app/spdk_top/spdk_top.o 00:02:25.811 TEST_HEADER include/spdk/init.h 00:02:25.811 TEST_HEADER include/spdk/ioat.h 00:02:25.811 TEST_HEADER include/spdk/ioat_spec.h 00:02:25.811 TEST_HEADER include/spdk/idxd_spec.h 00:02:25.811 TEST_HEADER include/spdk/iscsi_spec.h 00:02:25.811 TEST_HEADER include/spdk/json.h 00:02:25.811 TEST_HEADER include/spdk/jsonrpc.h 00:02:25.811 TEST_HEADER include/spdk/keyring.h 00:02:25.811 TEST_HEADER include/spdk/keyring_module.h 00:02:25.811 TEST_HEADER include/spdk/likely.h 00:02:25.811 TEST_HEADER include/spdk/log.h 00:02:25.811 TEST_HEADER include/spdk/lvol.h 00:02:25.811 TEST_HEADER include/spdk/memory.h 00:02:25.811 TEST_HEADER include/spdk/mmio.h 00:02:25.811 TEST_HEADER include/spdk/nbd.h 00:02:25.811 
TEST_HEADER include/spdk/net.h 00:02:25.811 TEST_HEADER include/spdk/notify.h 00:02:25.811 TEST_HEADER include/spdk/nvme_intel.h 00:02:25.811 TEST_HEADER include/spdk/nvme.h 00:02:25.811 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:02:25.811 TEST_HEADER include/spdk/nvme_ocssd.h 00:02:25.811 TEST_HEADER include/spdk/nvme_spec.h 00:02:25.811 TEST_HEADER include/spdk/nvme_zns.h 00:02:25.811 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:02:25.811 TEST_HEADER include/spdk/nvmf_cmd.h 00:02:25.811 CC app/spdk_dd/spdk_dd.o 00:02:25.811 TEST_HEADER include/spdk/nvmf.h 00:02:25.811 CC app/spdk_nvme_perf/perf.o 00:02:25.811 TEST_HEADER include/spdk/nvmf_transport.h 00:02:25.812 TEST_HEADER include/spdk/nvmf_spec.h 00:02:25.812 TEST_HEADER include/spdk/opal.h 00:02:25.812 CC examples/interrupt_tgt/interrupt_tgt.o 00:02:25.812 TEST_HEADER include/spdk/opal_spec.h 00:02:25.812 TEST_HEADER include/spdk/pci_ids.h 00:02:25.812 TEST_HEADER include/spdk/queue.h 00:02:25.812 TEST_HEADER include/spdk/reduce.h 00:02:25.812 TEST_HEADER include/spdk/pipe.h 00:02:25.812 CC app/nvmf_tgt/nvmf_main.o 00:02:25.812 TEST_HEADER include/spdk/rpc.h 00:02:25.812 TEST_HEADER include/spdk/scheduler.h 00:02:25.812 TEST_HEADER include/spdk/scsi.h 00:02:25.812 TEST_HEADER include/spdk/sock.h 00:02:25.812 TEST_HEADER include/spdk/scsi_spec.h 00:02:25.812 TEST_HEADER include/spdk/thread.h 00:02:25.812 TEST_HEADER include/spdk/string.h 00:02:25.812 TEST_HEADER include/spdk/stdinc.h 00:02:25.812 TEST_HEADER include/spdk/trace.h 00:02:25.812 TEST_HEADER include/spdk/trace_parser.h 00:02:25.812 TEST_HEADER include/spdk/tree.h 00:02:25.812 TEST_HEADER include/spdk/util.h 00:02:25.812 TEST_HEADER include/spdk/uuid.h 00:02:25.812 TEST_HEADER include/spdk/ublk.h 00:02:25.812 TEST_HEADER include/spdk/version.h 00:02:25.812 CC app/iscsi_tgt/iscsi_tgt.o 00:02:25.812 TEST_HEADER include/spdk/vfio_user_pci.h 00:02:25.812 TEST_HEADER include/spdk/vfio_user_spec.h 00:02:25.812 TEST_HEADER include/spdk/vhost.h 00:02:25.812 TEST_HEADER include/spdk/vmd.h 00:02:25.812 TEST_HEADER include/spdk/xor.h 00:02:25.812 TEST_HEADER include/spdk/zipf.h 00:02:25.812 CXX test/cpp_headers/accel.o 00:02:25.812 CXX test/cpp_headers/accel_module.o 00:02:25.812 CXX test/cpp_headers/assert.o 00:02:25.812 CXX test/cpp_headers/base64.o 00:02:25.812 CXX test/cpp_headers/barrier.o 00:02:25.812 CXX test/cpp_headers/bdev.o 00:02:25.812 CXX test/cpp_headers/bdev_module.o 00:02:25.812 CXX test/cpp_headers/bdev_zone.o 00:02:25.812 CXX test/cpp_headers/bit_array.o 00:02:25.812 CXX test/cpp_headers/bit_pool.o 00:02:25.812 CXX test/cpp_headers/blobfs_bdev.o 00:02:25.812 CXX test/cpp_headers/blob_bdev.o 00:02:25.812 CXX test/cpp_headers/blobfs.o 00:02:25.812 CXX test/cpp_headers/conf.o 00:02:25.812 CXX test/cpp_headers/config.o 00:02:25.812 CXX test/cpp_headers/blob.o 00:02:25.812 CXX test/cpp_headers/cpuset.o 00:02:25.812 CXX test/cpp_headers/crc32.o 00:02:25.812 CXX test/cpp_headers/crc16.o 00:02:26.082 CXX test/cpp_headers/dif.o 00:02:26.082 CXX test/cpp_headers/crc64.o 00:02:26.082 CXX test/cpp_headers/dma.o 00:02:26.082 CXX test/cpp_headers/env_dpdk.o 00:02:26.082 CXX test/cpp_headers/endian.o 00:02:26.082 CXX test/cpp_headers/env.o 00:02:26.082 CXX test/cpp_headers/event.o 00:02:26.082 CXX test/cpp_headers/fd_group.o 00:02:26.082 CXX test/cpp_headers/file.o 00:02:26.082 CC app/spdk_tgt/spdk_tgt.o 00:02:26.082 CXX test/cpp_headers/fd.o 00:02:26.082 CXX test/cpp_headers/hexlify.o 00:02:26.082 CXX test/cpp_headers/gpt_spec.o 00:02:26.082 CXX test/cpp_headers/ftl.o 
00:02:26.082 CXX test/cpp_headers/histogram_data.o 00:02:26.083 CXX test/cpp_headers/idxd_spec.o 00:02:26.083 CXX test/cpp_headers/idxd.o 00:02:26.083 CXX test/cpp_headers/init.o 00:02:26.083 CXX test/cpp_headers/ioat.o 00:02:26.083 CXX test/cpp_headers/ioat_spec.o 00:02:26.083 CXX test/cpp_headers/iscsi_spec.o 00:02:26.083 CXX test/cpp_headers/jsonrpc.o 00:02:26.083 CXX test/cpp_headers/json.o 00:02:26.083 CXX test/cpp_headers/keyring_module.o 00:02:26.083 CXX test/cpp_headers/keyring.o 00:02:26.083 CXX test/cpp_headers/likely.o 00:02:26.083 CXX test/cpp_headers/log.o 00:02:26.083 CXX test/cpp_headers/lvol.o 00:02:26.083 CXX test/cpp_headers/memory.o 00:02:26.083 CXX test/cpp_headers/mmio.o 00:02:26.083 CXX test/cpp_headers/nbd.o 00:02:26.083 CXX test/cpp_headers/net.o 00:02:26.083 CXX test/cpp_headers/notify.o 00:02:26.083 CXX test/cpp_headers/nvme_ocssd.o 00:02:26.083 CXX test/cpp_headers/nvme.o 00:02:26.083 CXX test/cpp_headers/nvme_intel.o 00:02:26.083 CXX test/cpp_headers/nvme_spec.o 00:02:26.083 CXX test/cpp_headers/nvmf_cmd.o 00:02:26.083 CXX test/cpp_headers/nvme_zns.o 00:02:26.083 CXX test/cpp_headers/nvme_ocssd_spec.o 00:02:26.083 CXX test/cpp_headers/nvmf.o 00:02:26.083 CXX test/cpp_headers/nvmf_fc_spec.o 00:02:26.083 CXX test/cpp_headers/nvmf_spec.o 00:02:26.083 CXX test/cpp_headers/nvmf_transport.o 00:02:26.083 CXX test/cpp_headers/opal.o 00:02:26.083 CXX test/cpp_headers/opal_spec.o 00:02:26.083 CXX test/cpp_headers/pci_ids.o 00:02:26.083 CXX test/cpp_headers/pipe.o 00:02:26.083 CXX test/cpp_headers/queue.o 00:02:26.083 CC examples/util/zipf/zipf.o 00:02:26.083 CC test/app/stub/stub.o 00:02:26.083 CXX test/cpp_headers/reduce.o 00:02:26.083 CC examples/ioat/perf/perf.o 00:02:26.083 CC test/app/histogram_perf/histogram_perf.o 00:02:26.083 CC examples/ioat/verify/verify.o 00:02:26.083 CC test/env/pci/pci_ut.o 00:02:26.083 CC test/env/memory/memory_ut.o 00:02:26.083 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:02:26.083 CXX test/cpp_headers/rpc.o 00:02:26.083 CC test/env/vtophys/vtophys.o 00:02:26.083 CC test/thread/poller_perf/poller_perf.o 00:02:26.083 CC test/app/jsoncat/jsoncat.o 00:02:26.083 CC test/dma/test_dma/test_dma.o 00:02:26.083 CC app/fio/nvme/fio_plugin.o 00:02:26.083 CC test/app/bdev_svc/bdev_svc.o 00:02:26.083 LINK spdk_lspci 00:02:26.353 LINK rpc_client_test 00:02:26.353 CC app/fio/bdev/fio_plugin.o 00:02:26.353 LINK interrupt_tgt 00:02:26.353 LINK nvmf_tgt 00:02:26.353 CC test/env/mem_callbacks/mem_callbacks.o 00:02:26.353 LINK spdk_nvme_discover 00:02:26.353 LINK spdk_trace_record 00:02:26.640 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:02:26.640 LINK zipf 00:02:26.640 LINK histogram_perf 00:02:26.640 CXX test/cpp_headers/scheduler.o 00:02:26.640 CXX test/cpp_headers/scsi.o 00:02:26.640 CXX test/cpp_headers/scsi_spec.o 00:02:26.640 CXX test/cpp_headers/sock.o 00:02:26.640 CXX test/cpp_headers/stdinc.o 00:02:26.640 CXX test/cpp_headers/string.o 00:02:26.640 LINK poller_perf 00:02:26.640 CXX test/cpp_headers/thread.o 00:02:26.640 CXX test/cpp_headers/trace.o 00:02:26.640 CXX test/cpp_headers/trace_parser.o 00:02:26.640 CXX test/cpp_headers/tree.o 00:02:26.640 CXX test/cpp_headers/ublk.o 00:02:26.640 LINK env_dpdk_post_init 00:02:26.640 CXX test/cpp_headers/util.o 00:02:26.640 CXX test/cpp_headers/uuid.o 00:02:26.640 CXX test/cpp_headers/version.o 00:02:26.640 CXX test/cpp_headers/vfio_user_pci.o 00:02:26.640 CXX test/cpp_headers/vfio_user_spec.o 00:02:26.640 CXX test/cpp_headers/vhost.o 00:02:26.640 LINK spdk_tgt 00:02:26.640 CXX test/cpp_headers/vmd.o 
00:02:26.640 CXX test/cpp_headers/xor.o 00:02:26.640 CXX test/cpp_headers/zipf.o 00:02:26.640 LINK ioat_perf 00:02:26.640 LINK bdev_svc 00:02:26.640 LINK verify 00:02:26.640 LINK iscsi_tgt 00:02:26.640 LINK vtophys 00:02:26.640 LINK jsoncat 00:02:26.640 LINK spdk_dd 00:02:26.640 LINK stub 00:02:26.640 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:02:26.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:02:26.640 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:02:26.640 LINK spdk_trace 00:02:26.899 LINK pci_ut 00:02:26.899 LINK test_dma 00:02:26.899 CC examples/vmd/led/led.o 00:02:26.899 LINK spdk_nvme 00:02:26.899 CC examples/idxd/perf/perf.o 00:02:26.899 CC examples/sock/hello_world/hello_sock.o 00:02:26.899 CC examples/vmd/lsvmd/lsvmd.o 00:02:26.899 CC examples/thread/thread/thread_ex.o 00:02:27.159 CC test/event/reactor_perf/reactor_perf.o 00:02:27.159 LINK nvme_fuzz 00:02:27.159 CC test/event/app_repeat/app_repeat.o 00:02:27.159 CC test/event/event_perf/event_perf.o 00:02:27.159 CC test/event/reactor/reactor.o 00:02:27.159 LINK spdk_bdev 00:02:27.159 LINK led 00:02:27.159 LINK lsvmd 00:02:27.159 CC app/vhost/vhost.o 00:02:27.159 LINK vhost_fuzz 00:02:27.159 CC test/event/scheduler/scheduler.o 00:02:27.159 LINK hello_sock 00:02:27.159 LINK spdk_nvme_perf 00:02:27.159 LINK spdk_nvme_identify 00:02:27.159 LINK reactor_perf 00:02:27.159 LINK reactor 00:02:27.159 LINK spdk_top 00:02:27.159 LINK mem_callbacks 00:02:27.159 LINK idxd_perf 00:02:27.159 LINK event_perf 00:02:27.159 LINK app_repeat 00:02:27.159 LINK thread 00:02:27.418 LINK vhost 00:02:27.418 LINK scheduler 00:02:27.418 CC test/accel/dif/dif.o 00:02:27.418 LINK memory_ut 00:02:27.418 CC test/nvme/sgl/sgl.o 00:02:27.418 CC test/nvme/fdp/fdp.o 00:02:27.418 CC test/nvme/reset/reset.o 00:02:27.418 CC test/nvme/overhead/overhead.o 00:02:27.418 CC test/nvme/simple_copy/simple_copy.o 00:02:27.418 CC test/nvme/err_injection/err_injection.o 00:02:27.418 CC test/nvme/fused_ordering/fused_ordering.o 00:02:27.418 CC test/nvme/connect_stress/connect_stress.o 00:02:27.419 CC test/nvme/aer/aer.o 00:02:27.419 CC test/nvme/compliance/nvme_compliance.o 00:02:27.419 CC test/nvme/reserve/reserve.o 00:02:27.419 CC test/nvme/doorbell_aers/doorbell_aers.o 00:02:27.419 CC test/nvme/e2edp/nvme_dp.o 00:02:27.419 CC test/nvme/startup/startup.o 00:02:27.419 CC test/nvme/boot_partition/boot_partition.o 00:02:27.419 CC test/nvme/cuse/cuse.o 00:02:27.419 CC test/blobfs/mkfs/mkfs.o 00:02:27.678 CC test/lvol/esnap/esnap.o 00:02:27.678 LINK startup 00:02:27.678 LINK boot_partition 00:02:27.678 LINK connect_stress 00:02:27.678 CC examples/nvme/arbitration/arbitration.o 00:02:27.678 CC examples/nvme/nvme_manage/nvme_manage.o 00:02:27.678 LINK err_injection 00:02:27.678 CC examples/nvme/reconnect/reconnect.o 00:02:27.678 CC examples/nvme/cmb_copy/cmb_copy.o 00:02:27.678 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:02:27.678 CC examples/nvme/hotplug/hotplug.o 00:02:27.678 LINK doorbell_aers 00:02:27.678 CC examples/nvme/abort/abort.o 00:02:27.678 LINK sgl 00:02:27.678 LINK simple_copy 00:02:27.678 CC examples/nvme/hello_world/hello_world.o 00:02:27.678 LINK reserve 00:02:27.678 LINK reset 00:02:27.678 LINK fused_ordering 00:02:27.678 LINK mkfs 00:02:27.678 LINK nvme_dp 00:02:27.678 LINK aer 00:02:27.678 LINK nvme_compliance 00:02:27.678 LINK overhead 00:02:27.678 LINK fdp 00:02:27.678 CC examples/accel/perf/accel_perf.o 00:02:27.678 LINK dif 00:02:27.678 CC examples/blob/cli/blobcli.o 00:02:27.678 CC examples/blob/hello_world/hello_blob.o 00:02:27.937 LINK 
pmr_persistence 00:02:27.937 LINK cmb_copy 00:02:27.937 LINK hotplug 00:02:27.937 LINK hello_world 00:02:27.937 LINK arbitration 00:02:27.937 LINK reconnect 00:02:27.937 LINK abort 00:02:27.937 LINK hello_blob 00:02:27.937 LINK nvme_manage 00:02:28.195 LINK iscsi_fuzz 00:02:28.195 LINK accel_perf 00:02:28.195 LINK blobcli 00:02:28.195 CC test/bdev/bdevio/bdevio.o 00:02:28.454 LINK cuse 00:02:28.714 CC examples/bdev/hello_world/hello_bdev.o 00:02:28.714 LINK bdevio 00:02:28.714 CC examples/bdev/bdevperf/bdevperf.o 00:02:28.714 LINK hello_bdev 00:02:29.283 LINK bdevperf 00:02:29.851 CC examples/nvmf/nvmf/nvmf.o 00:02:29.851 LINK nvmf 00:02:30.849 LINK esnap 00:02:31.419 00:02:31.419 real 0m43.445s 00:02:31.419 user 6m30.439s 00:02:31.419 sys 3m26.365s 00:02:31.419 11:48:18 make -- common/autotest_common.sh@1124 -- $ xtrace_disable 00:02:31.419 11:48:18 make -- common/autotest_common.sh@10 -- $ set +x 00:02:31.419 ************************************ 00:02:31.419 END TEST make 00:02:31.419 ************************************ 00:02:31.419 11:48:18 -- common/autotest_common.sh@1142 -- $ return 0 00:02:31.419 11:48:18 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:02:31.420 11:48:18 -- pm/common@29 -- $ signal_monitor_resources TERM 00:02:31.420 11:48:18 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:02:31.420 11:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:02:31.420 11:48:18 -- pm/common@44 -- $ pid=37919 00:02:31.420 11:48:18 -- pm/common@50 -- $ kill -TERM 37919 00:02:31.420 11:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:02:31.420 11:48:18 -- pm/common@44 -- $ pid=37920 00:02:31.420 11:48:18 -- pm/common@50 -- $ kill -TERM 37920 00:02:31.420 11:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:02:31.420 11:48:18 -- pm/common@44 -- $ pid=37922 00:02:31.420 11:48:18 -- pm/common@50 -- $ kill -TERM 37922 00:02:31.420 11:48:18 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:02:31.420 11:48:18 -- pm/common@44 -- $ pid=37943 00:02:31.420 11:48:18 -- pm/common@50 -- $ sudo -E kill -TERM 37943 00:02:31.420 11:48:18 -- spdk/autotest.sh@25 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:02:31.420 11:48:18 -- nvmf/common.sh@7 -- # uname -s 00:02:31.420 11:48:18 -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:02:31.420 11:48:18 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:02:31.420 11:48:18 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:02:31.420 11:48:18 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:02:31.420 11:48:18 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:02:31.420 11:48:18 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:02:31.420 11:48:18 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:02:31.420 11:48:18 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:02:31.420 11:48:18 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:02:31.420 11:48:18 -- nvmf/common.sh@17 -- # 
nvme gen-hostnqn 00:02:31.420 11:48:18 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:02:31.420 11:48:18 -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:02:31.420 11:48:18 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:02:31.420 11:48:18 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:02:31.420 11:48:18 -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:02:31.420 11:48:18 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:02:31.420 11:48:18 -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:02:31.420 11:48:18 -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:02:31.420 11:48:18 -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:02:31.420 11:48:18 -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:02:31.420 11:48:18 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.420 11:48:18 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.420 11:48:18 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.420 11:48:18 -- paths/export.sh@5 -- # export PATH 00:02:31.420 11:48:18 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:02:31.420 11:48:18 -- nvmf/common.sh@47 -- # : 0 00:02:31.420 11:48:18 -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:02:31.420 11:48:18 -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:02:31.420 11:48:18 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:02:31.420 11:48:18 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:02:31.420 11:48:18 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:02:31.420 11:48:18 -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:02:31.420 11:48:18 -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:02:31.420 11:48:18 -- nvmf/common.sh@51 -- # have_pci_nics=0 00:02:31.420 11:48:18 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:02:31.420 11:48:18 -- spdk/autotest.sh@32 -- # uname -s 00:02:31.420 11:48:18 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:02:31.420 11:48:18 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:02:31.420 11:48:18 -- spdk/autotest.sh@34 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.420 11:48:18 -- spdk/autotest.sh@39 -- # echo '|/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/core-collector.sh %P %s %t' 00:02:31.420 11:48:18 -- spdk/autotest.sh@40 -- # 
echo /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/coredumps 00:02:31.420 11:48:18 -- spdk/autotest.sh@44 -- # modprobe nbd 00:02:31.420 11:48:18 -- spdk/autotest.sh@46 -- # type -P udevadm 00:02:31.420 11:48:18 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:02:31.420 11:48:18 -- spdk/autotest.sh@48 -- # udevadm_pid=97233 00:02:31.420 11:48:18 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:02:31.420 11:48:18 -- pm/common@17 -- # local monitor 00:02:31.420 11:48:18 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:02:31.420 11:48:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:02:31.420 11:48:18 -- pm/common@21 -- # date +%s 00:02:31.420 11:48:18 -- pm/common@25 -- # sleep 1 00:02:31.420 11:48:18 -- pm/common@21 -- # date +%s 00:02:31.420 11:48:18 -- pm/common@21 -- # date +%s 00:02:31.420 11:48:18 -- pm/common@21 -- # date +%s 00:02:31.420 11:48:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900898 00:02:31.420 11:48:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900898 00:02:31.420 11:48:18 -- pm/common@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900898 00:02:31.420 11:48:18 -- pm/common@21 -- # sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autotest.sh.1721900898 00:02:31.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900898_collect-vmstat.pm.log 00:02:31.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900898_collect-cpu-load.pm.log 00:02:31.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900898_collect-cpu-temp.pm.log 00:02:31.420 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autotest.sh.1721900898_collect-bmc-pm.bmc.pm.log 00:02:32.361 11:48:19 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:02:32.621 11:48:19 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:02:32.621 11:48:19 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:32.621 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:02:32.621 11:48:19 -- spdk/autotest.sh@59 -- # create_test_list 00:02:32.621 11:48:19 -- common/autotest_common.sh@746 -- # xtrace_disable 00:02:32.621 11:48:19 -- common/autotest_common.sh@10 -- # set +x 00:02:32.621 11:48:19 -- spdk/autotest.sh@61 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/autotest.sh 00:02:32.621 11:48:19 -- spdk/autotest.sh@61 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.621 11:48:19 -- spdk/autotest.sh@61 -- # src=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 
00:02:32.621 11:48:19 -- spdk/autotest.sh@62 -- # out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:02:32.621 11:48:19 -- spdk/autotest.sh@63 -- # cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:02:32.621 11:48:19 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:02:32.621 11:48:19 -- common/autotest_common.sh@1455 -- # uname 00:02:32.621 11:48:19 -- common/autotest_common.sh@1455 -- # '[' Linux = FreeBSD ']' 00:02:32.621 11:48:19 -- spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:02:32.621 11:48:19 -- common/autotest_common.sh@1475 -- # uname 00:02:32.621 11:48:19 -- common/autotest_common.sh@1475 -- # [[ Linux = FreeBSD ]] 00:02:32.621 11:48:19 -- spdk/autotest.sh@71 -- # grep CC_TYPE mk/cc.mk 00:02:32.621 11:48:19 -- spdk/autotest.sh@71 -- # CC_TYPE=CC_TYPE=gcc 00:02:32.621 11:48:19 -- spdk/autotest.sh@72 -- # hash lcov 00:02:32.621 11:48:19 -- spdk/autotest.sh@72 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:02:32.621 11:48:19 -- spdk/autotest.sh@80 -- # export 'LCOV_OPTS= 00:02:32.621 --rc lcov_branch_coverage=1 00:02:32.621 --rc lcov_function_coverage=1 00:02:32.621 --rc genhtml_branch_coverage=1 00:02:32.621 --rc genhtml_function_coverage=1 00:02:32.621 --rc genhtml_legend=1 00:02:32.621 --rc geninfo_all_blocks=1 00:02:32.621 ' 00:02:32.621 11:48:19 -- spdk/autotest.sh@80 -- # LCOV_OPTS=' 00:02:32.621 --rc lcov_branch_coverage=1 00:02:32.621 --rc lcov_function_coverage=1 00:02:32.621 --rc genhtml_branch_coverage=1 00:02:32.621 --rc genhtml_function_coverage=1 00:02:32.621 --rc genhtml_legend=1 00:02:32.621 --rc geninfo_all_blocks=1 00:02:32.621 ' 00:02:32.621 11:48:19 -- spdk/autotest.sh@81 -- # export 'LCOV=lcov 00:02:32.621 --rc lcov_branch_coverage=1 00:02:32.621 --rc lcov_function_coverage=1 00:02:32.621 --rc genhtml_branch_coverage=1 00:02:32.621 --rc genhtml_function_coverage=1 00:02:32.621 --rc genhtml_legend=1 00:02:32.621 --rc geninfo_all_blocks=1 00:02:32.621 --no-external' 00:02:32.621 11:48:19 -- spdk/autotest.sh@81 -- # LCOV='lcov 00:02:32.621 --rc lcov_branch_coverage=1 00:02:32.621 --rc lcov_function_coverage=1 00:02:32.621 --rc genhtml_branch_coverage=1 00:02:32.621 --rc genhtml_function_coverage=1 00:02:32.621 --rc genhtml_legend=1 00:02:32.621 --rc geninfo_all_blocks=1 00:02:32.621 --no-external' 00:02:32.621 11:48:19 -- spdk/autotest.sh@83 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -v 00:02:32.621 lcov: LCOV version 1.14 00:02:32.621 11:48:19 -- spdk/autotest.sh@85 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -i -t Baseline -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info 00:02:42.605 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:02:42.605 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/nvme/nvme_stubs.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel.gcno 00:02:52.582 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/assert.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/accel_module.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_module.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/barrier.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/base64.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_pool.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bdev_zone.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob_bdev.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blobfs_bdev.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc32.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/config.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/bit_array.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc16.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/cpuset.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dma.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/conf.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/blob.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/endian.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/dif.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env_dpdk.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/env.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/crc64.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd_group.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/event.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/hexlify.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data 
for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/histogram_data.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/file.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/fd.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/gpt_spec.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd_spec.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/init.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat_spec.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ftl.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/jsonrpc.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/iscsi_spec.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/idxd.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ioat.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/keyring_module.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno:no functions found 00:02:52.582 geninfo: 
WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/likely.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/json.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/log.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/mmio.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/lvol.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nbd.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/memory.gcno 00:02:52.582 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno:no functions found 00:02:52.582 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/net.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_ocssd_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_zns.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_fc_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/notify.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno:no 
functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme_intel.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvme.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_cmd.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_transport.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/nvmf_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/opal_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pci_ids.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/queue.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/pipe.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/reduce.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/rpc.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scheduler.gcno 00:02:52.583 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/scsi_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/sock.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/stdinc.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/string.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/thread.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/tree.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/trace_parser.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/ublk.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/uuid.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/util.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/version.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_pci.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vfio_user_spec.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vhost.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/vmd.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/xor.gcno 00:02:52.583 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno:no functions found 00:02:52.583 geninfo: WARNING: GCOV did not produce any data for /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/cpp_headers/zipf.gcno 00:02:54.487 11:48:41 -- spdk/autotest.sh@89 -- # timing_enter pre_cleanup 00:02:54.487 11:48:41 -- common/autotest_common.sh@722 -- # xtrace_disable 00:02:54.487 11:48:41 -- common/autotest_common.sh@10 -- # set +x 00:02:54.487 11:48:41 -- spdk/autotest.sh@91 -- # rm -f 00:02:54.487 11:48:41 -- spdk/autotest.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:02:57.059 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:02:57.059 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:02:57.059 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:02:57.059 11:48:44 -- spdk/autotest.sh@96 -- # get_zoned_devs 00:02:57.059 11:48:44 -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:02:57.059 11:48:44 -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:02:57.059 11:48:44 -- common/autotest_common.sh@1670 -- # local nvme bdf 00:02:57.059 11:48:44 -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:02:57.059 11:48:44 -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:02:57.059 11:48:44 -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:02:57.059 11:48:44 -- 
common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:02:57.059 11:48:44 -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:02:57.059 11:48:44 -- spdk/autotest.sh@98 -- # (( 0 > 0 )) 00:02:57.059 11:48:44 -- spdk/autotest.sh@110 -- # for dev in /dev/nvme*n!(*p*) 00:02:57.059 11:48:44 -- spdk/autotest.sh@112 -- # [[ -z '' ]] 00:02:57.059 11:48:44 -- spdk/autotest.sh@113 -- # block_in_use /dev/nvme0n1 00:02:57.059 11:48:44 -- scripts/common.sh@378 -- # local block=/dev/nvme0n1 pt 00:02:57.059 11:48:44 -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:02:57.059 No valid GPT data, bailing 00:02:57.059 11:48:44 -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:02:57.059 11:48:44 -- scripts/common.sh@391 -- # pt= 00:02:57.059 11:48:44 -- scripts/common.sh@392 -- # return 1 00:02:57.059 11:48:44 -- spdk/autotest.sh@114 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:02:57.059 1+0 records in 00:02:57.059 1+0 records out 00:02:57.059 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00234997 s, 446 MB/s 00:02:57.059 11:48:44 -- spdk/autotest.sh@118 -- # sync 00:02:57.059 11:48:44 -- spdk/autotest.sh@120 -- # xtrace_disable_per_cmd reap_spdk_processes 00:02:57.059 11:48:44 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:02:57.059 11:48:44 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:02.339 11:48:48 -- spdk/autotest.sh@124 -- # uname -s 00:03:02.339 11:48:48 -- spdk/autotest.sh@124 -- # '[' Linux = Linux ']' 00:03:02.339 11:48:48 -- spdk/autotest.sh@125 -- # run_test setup.sh /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.339 11:48:48 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.339 11:48:48 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.339 11:48:48 -- common/autotest_common.sh@10 -- # set +x 00:03:02.339 ************************************ 00:03:02.339 START TEST setup.sh 00:03:02.339 ************************************ 00:03:02.339 11:48:48 setup.sh -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/test-setup.sh 00:03:02.339 * Looking for test storage... 00:03:02.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.339 11:48:48 setup.sh -- setup/test-setup.sh@10 -- # uname -s 00:03:02.339 11:48:48 setup.sh -- setup/test-setup.sh@10 -- # [[ Linux == Linux ]] 00:03:02.339 11:48:48 setup.sh -- setup/test-setup.sh@12 -- # run_test acl /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:02.339 11:48:48 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:02.339 11:48:48 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:02.339 11:48:48 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:02.339 ************************************ 00:03:02.339 START TEST acl 00:03:02.339 ************************************ 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/acl.sh 00:03:02.339 * Looking for test storage... 
00:03:02.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@10 -- # get_zoned_devs 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:02.339 11:48:48 setup.sh.acl -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@12 -- # devs=() 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@12 -- # declare -a devs 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@13 -- # drivers=() 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@13 -- # declare -A drivers 00:03:02.339 11:48:48 setup.sh.acl -- setup/acl.sh@51 -- # setup reset 00:03:02.339 11:48:48 setup.sh.acl -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:02.339 11:48:48 setup.sh.acl -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:04.880 11:48:51 setup.sh.acl -- setup/acl.sh@52 -- # collect_setup_devs 00:03:04.880 11:48:51 setup.sh.acl -- setup/acl.sh@16 -- # local dev driver 00:03:04.880 11:48:51 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:04.880 11:48:51 setup.sh.acl -- setup/acl.sh@15 -- # setup output status 00:03:04.880 11:48:51 setup.sh.acl -- setup/common.sh@9 -- # [[ output == output ]] 00:03:04.880 11:48:51 setup.sh.acl -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:03:07.419 Hugepages 00:03:07.419 node hugesize free / total 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 1048576kB == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 00:03:07.419 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 2048kB == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.0 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 
-- # [[ 0000:00:04.1 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.2 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.3 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.4 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.5 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.6 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:00:04.7 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:5e:00.0 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ nvme == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@21 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@22 -- # devs+=("$dev") 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@22 -- # drivers["$dev"]=nvme 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.0 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.1 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.2 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme 
]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.3 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.4 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.5 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.6 == *:*:*.* ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.419 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@19 -- # [[ 0000:80:04.7 == *:*:*.* ]] 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # [[ ioatdma == nvme ]] 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@20 -- # continue 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@18 -- # read -r _ dev _ _ _ driver _ 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@24 -- # (( 1 > 0 )) 00:03:07.420 11:48:54 setup.sh.acl -- setup/acl.sh@54 -- # run_test denied denied 00:03:07.420 11:48:54 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:07.420 11:48:54 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:07.420 11:48:54 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:07.420 ************************************ 00:03:07.420 START TEST denied 00:03:07.420 ************************************ 00:03:07.420 11:48:54 setup.sh.acl.denied -- common/autotest_common.sh@1123 -- # denied 00:03:07.420 11:48:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # PCI_BLOCKED=' 0000:5e:00.0' 00:03:07.420 11:48:54 setup.sh.acl.denied -- setup/acl.sh@39 -- # grep 'Skipping denied controller at 0000:5e:00.0' 00:03:07.420 11:48:54 setup.sh.acl.denied -- setup/acl.sh@38 -- # setup output config 00:03:07.420 11:48:54 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ output == output ]] 00:03:07.420 11:48:54 setup.sh.acl.denied -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:09.955 0000:5e:00.0 (8086 0a54): Skipping denied controller at 0000:5e:00.0 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@40 -- # verify 0000:5e:00.0 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@28 -- # local dev driver 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@30 -- # for dev in "$@" 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@31 -- # [[ -e /sys/bus/pci/devices/0000:5e:00.0 ]] 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@32 -- # readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver 00:03:09.955 11:48:57 setup.sh.acl.denied -- 
setup/acl.sh@32 -- # driver=/sys/bus/pci/drivers/nvme 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@33 -- # [[ nvme == \n\v\m\e ]] 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/acl.sh@41 -- # setup reset 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:09.955 11:48:57 setup.sh.acl.denied -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:14.149 00:03:14.149 real 0m6.633s 00:03:14.149 user 0m2.147s 00:03:14.149 sys 0m3.832s 00:03:14.149 11:49:01 setup.sh.acl.denied -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:14.149 11:49:01 setup.sh.acl.denied -- common/autotest_common.sh@10 -- # set +x 00:03:14.149 ************************************ 00:03:14.149 END TEST denied 00:03:14.149 ************************************ 00:03:14.149 11:49:01 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:14.149 11:49:01 setup.sh.acl -- setup/acl.sh@55 -- # run_test allowed allowed 00:03:14.149 11:49:01 setup.sh.acl -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:14.149 11:49:01 setup.sh.acl -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:14.149 11:49:01 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:14.149 ************************************ 00:03:14.149 START TEST allowed 00:03:14.149 ************************************ 00:03:14.149 11:49:01 setup.sh.acl.allowed -- common/autotest_common.sh@1123 -- # allowed 00:03:14.149 11:49:01 setup.sh.acl.allowed -- setup/acl.sh@46 -- # grep -E '0000:5e:00.0 .*: nvme -> .*' 00:03:14.149 11:49:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # PCI_ALLOWED=0000:5e:00.0 00:03:14.149 11:49:01 setup.sh.acl.allowed -- setup/acl.sh@45 -- # setup output config 00:03:14.149 11:49:01 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ output == output ]] 00:03:14.149 11:49:01 setup.sh.acl.allowed -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:17.439 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:17.439 11:49:04 setup.sh.acl.allowed -- setup/acl.sh@47 -- # verify 00:03:17.439 11:49:04 setup.sh.acl.allowed -- setup/acl.sh@28 -- # local dev driver 00:03:17.440 11:49:04 setup.sh.acl.allowed -- setup/acl.sh@48 -- # setup reset 00:03:17.440 11:49:04 setup.sh.acl.allowed -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:17.440 11:49:04 setup.sh.acl.allowed -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:20.738 00:03:20.738 real 0m6.103s 00:03:20.738 user 0m1.738s 00:03:20.738 sys 0m3.398s 00:03:20.738 11:49:07 setup.sh.acl.allowed -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.738 11:49:07 setup.sh.acl.allowed -- common/autotest_common.sh@10 -- # set +x 00:03:20.738 ************************************ 00:03:20.738 END TEST allowed 00:03:20.738 ************************************ 00:03:20.738 11:49:07 setup.sh.acl -- common/autotest_common.sh@1142 -- # return 0 00:03:20.738 00:03:20.738 real 0m18.525s 00:03:20.738 user 0m5.987s 00:03:20.738 sys 0m11.050s 00:03:20.738 11:49:07 setup.sh.acl -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:20.738 11:49:07 setup.sh.acl -- common/autotest_common.sh@10 -- # set +x 00:03:20.738 ************************************ 00:03:20.738 END TEST acl 00:03:20.738 ************************************ 00:03:20.738 11:49:07 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:20.738 11:49:07 setup.sh -- 
setup/test-setup.sh@13 -- # run_test hugepages /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.738 11:49:07 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.738 11:49:07 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.738 11:49:07 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:20.738 ************************************ 00:03:20.738 START TEST hugepages 00:03:20.738 ************************************ 00:03:20.738 11:49:07 setup.sh.hugepages -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/hugepages.sh 00:03:20.738 * Looking for test storage... 00:03:20.738 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # nodes_sys=() 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@10 -- # declare -a nodes_sys 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@12 -- # declare -i default_hugepages=0 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@13 -- # declare -i no_nodes=0 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@14 -- # declare -i nr_hugepages=0 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # get_meminfo Hugepagesize 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@17 -- # local get=Hugepagesize 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@18 -- # local node= 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@19 -- # local var val 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@20 -- # local mem_f mem 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@28 -- # mapfile -t mem 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 168152708 kB' 'MemAvailable: 171387180 kB' 'Buffers: 3896 kB' 'Cached: 14736184 kB' 'SwapCached: 0 kB' 'Active: 11596428 kB' 'Inactive: 3694312 kB' 'Active(anon): 11178472 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 553996 kB' 'Mapped: 212716 kB' 'Shmem: 10627812 kB' 'KReclaimable: 532840 kB' 'Slab: 1186948 kB' 'SReclaimable: 532840 kB' 'SUnreclaim: 654108 kB' 'KernelStack: 20832 kB' 'PageTables: 9436 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 101982020 kB' 'Committed_AS: 12720616 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 2048' 'HugePages_Free: 2048' 'HugePages_Rsvd: 0' 
'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 4194304 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- 
setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.738 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 
00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.739 11:49:07 
setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.739 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # continue 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # IFS=': ' 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@31 -- # read -r var val _ 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@32 -- # [[ Hugepagesize == \H\u\g\e\p\a\g\e\s\i\z\e ]] 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@33 -- # echo 2048 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/common.sh@33 -- # return 0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@16 -- # default_hugepages=2048 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@17 -- # default_huge_nr=/sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@18 -- # global_huge_nr=/proc/sys/vm/nr_hugepages 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@21 -- # unset -v HUGE_EVEN_ALLOC 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@22 -- # unset -v HUGEMEM 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@23 -- # unset -v HUGENODE 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@24 -- # unset -v NRHUGE 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@207 -- # get_nodes 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@27 -- # local node 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=2048 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@208 -- # clear_hp 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # 
echo 0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:20.740 11:49:07 setup.sh.hugepages -- setup/hugepages.sh@210 -- # run_test default_setup default_setup 00:03:20.740 11:49:07 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:20.740 11:49:07 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:20.740 11:49:07 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:20.740 ************************************ 00:03:20.740 START TEST default_setup 00:03:20.740 ************************************ 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1123 -- # default_setup 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@136 -- # get_test_nr_hugepages 2097152 0 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@49 -- # local size=2097152 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@51 -- # shift 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@52 -- # local node_ids 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@62 -- # local user_nodes 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@73 -- # return 0 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/hugepages.sh@137 -- # setup output 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/common.sh@9 -- # [[ output == output ]] 00:03:20.740 11:49:07 setup.sh.hugepages.default_setup -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:22.679 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.5 (8086 2021): ioatdma -> 
vfio-pci 00:03:22.679 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:03:22.679 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:03:22.939 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:03:22.939 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:03:22.939 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:03:22.939 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:03:23.882 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@138 -- # verify_nr_hugepages 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@89 -- # local node 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@90 -- # local sorted_t 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@91 -- # local sorted_s 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@92 -- # local surp 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@93 -- # local resv 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@94 -- # local anon 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170314828 kB' 'MemAvailable: 173549284 kB' 'Buffers: 3896 kB' 'Cached: 14736292 kB' 'SwapCached: 0 kB' 'Active: 11615764 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197808 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572280 kB' 'Mapped: 212876 kB' 'Shmem: 10627920 kB' 'KReclaimable: 532808 kB' 'Slab: 1185128 kB' 'SReclaimable: 532808 
kB' 'SUnreclaim: 652320 kB' 'KernelStack: 20592 kB' 'PageTables: 9636 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12746384 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.882 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # 
read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 
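Before the default_setup test body starts, hugepages.sh (traced a few entries above) fixes default_hugepages at 2048 kB, notes the default and global nr_hugepages paths, clears any existing allocation on both NUMA nodes, exports CLEAR_HUGE=yes, and then asks for 2 GiB (2097152 kB) as 1024 x 2048 kB pages on node 0. A hedged sketch of that clear-and-request flow; the sysfs paths match the ones in the trace, but the redirect targets are inferred from the standard per-node hugepage layout rather than shown verbatim:

  # Sketch, assuming the standard sysfs layout for per-node hugepage pools.
  default_hugepages=2048                      # kB, from the Hugepagesize lookup above
  global_huge_nr=/proc/sys/vm/nr_hugepages
  for node in /sys/devices/system/node/node[0-9]*; do
      for hp in "$node"/hugepages/hugepages-*; do
          echo 0 > "$hp/nr_hugepages"         # clear_hp: drop any existing pool
      done
  done
  export CLEAR_HUGE=yes
  # default_setup requests 2097152 kB as 2048 kB pages on node 0 -> 1024 pages;
  # one way such a request can be applied (the test drives it via scripts/setup.sh):
  echo 1024 > /sys/devices/system/node/node0/hugepages/hugepages-2048kB/nr_hugepages
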
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 
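The `0000:00:04.x (8086 2021): ioatdma -> vfio-pci` and `0000:5e:00.0 (8086 0a54): nvme -> vfio-pci` lines earlier in this run come from scripts/setup.sh rebinding the ioat DMA channels and the test NVMe SSD to vfio-pci so SPDK's user-space drivers can claim them. The script's internals are not shown in this log; the sketch below is a generic sysfs rebind for a single device, with the BDF and driver names taken from the log only as illustration:

  # Generic PCI rebind sketch, not the literal setup.sh code path.
  # Assumes the vfio-pci module is already loaded.
  bdf=0000:5e:00.0                                          # illustrative BDF from the log
  echo "$bdf" > /sys/bus/pci/devices/$bdf/driver/unbind     # detach nvme, if bound
  echo vfio-pci > /sys/bus/pci/devices/$bdf/driver_override # pin the override
  echo "$bdf" > /sys/bus/pci/drivers/vfio-pci/bind          # attach vfio-pci
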
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@97 -- # anon=0 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.883 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170319384 kB' 'MemAvailable: 173553840 kB' 'Buffers: 3896 kB' 'Cached: 14736292 kB' 'SwapCached: 0 kB' 'Active: 11614484 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196528 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571892 kB' 'Mapped: 212844 kB' 'Shmem: 10627920 kB' 'KReclaimable: 532808 kB' 'Slab: 1185160 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652352 kB' 'KernelStack: 20592 kB' 'PageTables: 9620 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12744912 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317160 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup 
-- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # 
continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.884 11:49:10 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.884 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 
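verify_nr_hugepages then re-reads the same counters after the allocation: AnonHugePages (only when transparent hugepages are not set to [never], hence the `always [madvise] never` check above), followed by HugePages_Surp and, continuing just below, HugePages_Rsvd; in this run they come back 0 while the snapshots show HugePages_Total and HugePages_Free at the requested 1024. A condensed sketch of that readback, reusing the get_meminfo_sketch helper from the earlier sketch; the transparent_hugepage sysfs path is an assumed source for the "always [madvise] never" string seen in the trace:

  # Sketch of the verification readback; requires get_meminfo_sketch from the earlier sketch.
  if [[ $(cat /sys/kernel/mm/transparent_hugepage/enabled) != *"[never]"* ]]; then
      anon=$(get_meminfo_sketch AnonHugePages)   # 0 kB in this run
  fi
  surp=$(get_meminfo_sketch HugePages_Surp)      # 0
  resv=$(get_meminfo_sketch HugePages_Rsvd)      # 0 per the meminfo snapshots
  total=$(get_meminfo_sketch HugePages_Total)    # 1024, matching the request
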
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@99 -- # surp=0 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@28 -- # mapfile -t mem 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170319508 kB' 'MemAvailable: 173553964 kB' 'Buffers: 3896 kB' 'Cached: 14736312 kB' 'SwapCached: 0 kB' 'Active: 11615000 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197044 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572372 kB' 'Mapped: 212848 kB' 'Shmem: 10627940 kB' 'KReclaimable: 532808 kB' 'Slab: 1185112 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652304 kB' 'KernelStack: 20736 kB' 'PageTables: 9900 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12746424 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- 
setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:10 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 
11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.885 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.886 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@100 -- # resv=0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:23.887 nr_hugepages=1024 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:23.887 resv_hugepages=0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:23.887 surplus_hugepages=0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:23.887 anon_hugepages=0 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node= 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 
11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170318560 kB' 'MemAvailable: 173553016 kB' 'Buffers: 3896 kB' 'Cached: 14736336 kB' 'SwapCached: 0 kB' 'Active: 11615436 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197480 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572784 kB' 'Mapped: 212848 kB' 'Shmem: 10627964 kB' 'KReclaimable: 532808 kB' 'Slab: 1185112 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652304 kB' 'KernelStack: 20832 kB' 'PageTables: 9948 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12746448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.887 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 
11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 
setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- 
setup/common.sh@31 -- # read -r var val _ 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 1024 00:03:23.888 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@112 -- # get_nodes 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@27 -- # local node 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@18 -- # local node=0 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@19 -- # local var val 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@20 -- # local mem_f mem 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@28 -- # mapfile -t mem 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91908592 kB' 'MemUsed: 5707036 kB' 'SwapCached: 0 kB' 'Active: 1999308 kB' 'Inactive: 216924 kB' 'Active(anon): 1837484 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2055960 kB' 'Mapped: 122836 kB' 'AnonPages: 163400 kB' 'Shmem: 1677212 kB' 'KernelStack: 11912 kB' 'PageTables: 5300 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 654868 kB' 
'SReclaimable: 347008 kB' 'SUnreclaim: 307860 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 
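[Editor's note] The trace around this point is setup/common.sh's get_meminfo helper scanning a meminfo file key by key: it loads either /proc/meminfo or /sys/devices/system/node/node<N>/meminfo, strips any "Node <N> " prefix, and walks each line with IFS=': ' read -r var val _ until the requested field (here HugePages_Surp for node 0) matches, then echoes its value. The following is a minimal, self-contained sketch of that lookup pattern only; it is illustrative, not the exact SPDK script, and it uses a plain read loop in place of the script's mapfile/array approach:

    # get_meminfo <field> [node]   e.g. get_meminfo HugePages_Surp 0
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # prefer the per-NUMA-node file when a node index is given and it exists
        if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
            mem_f=/sys/devices/system/node/node$node/meminfo
        fi
        local line var val _
        while IFS= read -r line; do
            line=${line#"Node $node "}           # per-node files prefix every line with "Node <N> "
            IFS=': ' read -r var val _ <<< "$line"
            if [[ $var == "$get" ]]; then
                echo "${val:-0}"                 # kB for sizes, a plain count for HugePages_* fields
                return 0
            fi
        done < "$mem_f"
        return 1
    }

    surp=$(get_meminfo HugePages_Surp 0)         # the node-0 lookup exercised in this part of the trace

Upstream in setup/hugepages.sh, the values returned this way feed the consistency check (( 1024 == nr_hugepages + surp + resv )) that appears earlier in the trace before the per-node accounting begins.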
00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 
setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.889 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 
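Right after this scan, the trace reports node0=1024 expecting 1024, closes out default_setup (real 0m3.558s), and starts per_node_1G_alloc, which asks for 1 GiB of default-sized hugepages on each of NUMA nodes 0 and 1: 1048576 kB / 2048 kB = 512 pages per node, exported as NRHUGE=512 HUGENODE=0,1. A rough sketch of that sizing step, with names taken from the hugepages.sh trace below (an approximation, not the script itself):

# Sizing step for per_node_1G_alloc, reconstructed from the hugepages.sh trace.
size_kb=1048576                      # 1 GiB worth of hugepages requested per node
default_hugepages=2048               # kB, Hugepagesize from the meminfo dumps below
user_nodes=(0 1)                     # HUGENODE=0,1 in the trace

(( size_kb >= default_hugepages )) || exit 1
nr_hugepages=$(( size_kb / default_hugepages ))   # 1048576 / 2048 = 512 pages per node

nodes_test=()
for node in "${user_nodes[@]}"; do
    nodes_test[$node]=$nr_hugepages  # expectation later echoed as "nodeN=... expecting ..."
done

# The trace then drives the allocation through SPDK's setup script
# (path abbreviated here; the log uses the full workspace path):
NRHUGE=$nr_hugepages HUGENODE=0,1 ./spdk/scripts/setup.sh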
00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # continue 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # IFS=': ' 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@31 -- # read -r var val _ 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # echo 0 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/common.sh@33 -- # return 0 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:23.890 node0=1024 expecting 1024 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:23.890 00:03:23.890 real 0m3.558s 00:03:23.890 user 0m1.039s 00:03:23.890 sys 0m1.624s 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:23.890 11:49:11 setup.sh.hugepages.default_setup -- common/autotest_common.sh@10 -- # set +x 00:03:23.890 ************************************ 00:03:23.890 END TEST default_setup 00:03:23.890 ************************************ 00:03:24.149 11:49:11 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:24.150 11:49:11 setup.sh.hugepages -- setup/hugepages.sh@211 -- # run_test per_node_1G_alloc per_node_1G_alloc 00:03:24.150 11:49:11 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:24.150 11:49:11 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:24.150 11:49:11 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:24.150 ************************************ 00:03:24.150 START TEST per_node_1G_alloc 00:03:24.150 ************************************ 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1123 -- # per_node_1G_alloc 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@143 -- # local IFS=, 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@145 -- # get_test_nr_hugepages 1048576 0 1 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- 
setup/hugepages.sh@50 -- # (( 3 > 1 )) 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@51 -- # shift 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # node_ids=('0' '1') 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 0 1 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0' '1') 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@69 -- # (( 2 > 0 )) 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=512 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # NRHUGE=512 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # HUGENODE=0,1 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@146 -- # setup output 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:24.150 11:49:11 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:26.059 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:26.059 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:26.059 
0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:26.059 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # nr_hugepages=1024 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@147 -- # verify_nr_hugepages 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170306888 kB' 'MemAvailable: 173541344 kB' 'Buffers: 3896 kB' 'Cached: 14736420 kB' 'SwapCached: 0 kB' 'Active: 11616272 kB' 'Inactive: 3694312 kB' 'Active(anon): 11198316 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 573052 kB' 'Mapped: 213000 kB' 'Shmem: 10628048 kB' 'KReclaimable: 532808 kB' 'Slab: 1186072 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653264 kB' 'KernelStack: 20640 kB' 'PageTables: 9524 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12744160 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 
'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.322 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # 
continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.323 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- 
setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 
0 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node= 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170306944 kB' 'MemAvailable: 173541400 kB' 'Buffers: 3896 kB' 'Cached: 14736424 kB' 'SwapCached: 0 kB' 'Active: 11614968 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197012 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572196 kB' 'Mapped: 212856 kB' 'Shmem: 10628052 kB' 'KReclaimable: 532808 kB' 'Slab: 1186056 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653248 kB' 'KernelStack: 20592 kB' 'PageTables: 9360 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12744180 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.324 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 
11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.325 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.325 11:49:13 
[trace: setup/common.sh@31-32 reads each remaining /proc/meminfo field (VmallocTotal through HugePages_Rsvd) and skips it -- no match for HugePages_Surp]
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@99 -- # surp=0
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.326 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170308260 kB' 'MemAvailable: 173542716 kB' 'Buffers: 3896 kB' 'Cached: 14736444 kB' 'SwapCached: 0 kB' 'Active: 11614928 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196972 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572148 kB' 'Mapped: 212856 kB' 'Shmem: 10628072 kB' 'KReclaimable: 532808 kB' 'Slab: 1186056 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653248 kB' 'KernelStack: 20560 kB' 'PageTables: 9264 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12744204 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
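The snapshot above carries the numbers the rest of this subtest keys on: HugePages_Total and HugePages_Free are both 1024, HugePages_Rsvd and HugePages_Surp are 0, and Hugepagesize is 2048 kB, so the pool is 1024 x 2048 kB = 2097152 kB, exactly what the Hugetlb line reports (2 GiB of 2 MB pages). That identity can be re-checked outside the harness with a one-liner like the following; this is an illustrative awk sketch, not part of setup/common.sh:

    awk '/^HugePages_Total:/ {t = $2}
         /^Hugepagesize:/    {sz = $2}
         /^Hugetlb:/         {h = $2}
         END {printf "pool = %d pages x %d kB = %d kB (Hugetlb reports %d kB)\n", t, sz, t*sz, h}' /proc/meminfo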
[trace: setup/common.sh@31-32 walks that snapshot field by field (MemTotal through HugePages_Free), skipping every entry that does not match HugePages_Rsvd]
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]]
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
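The two passes above (first HugePages_Surp, then HugePages_Rsvd) show the pattern setup/common.sh's get_meminfo follows each time: slurp the meminfo file with mapfile, strip any leading 'Node <n> ' prefix, then walk the lines with IFS=': ' and read -r var val _ until the requested field name matches, and echo that field's value (0 in both cases here). Below is a minimal standalone sketch of that lookup pattern; the function name get_field and its body are reconstructed from the trace for illustration, they are not the SPDK helper itself, and this version only handles plain /proc/meminfo-style lines (the real helper's prefix strip is what makes it work on per-node files as well):

    # get_field <name> [meminfo-file]  -- e.g. get_field HugePages_Rsvd
    get_field() {
        local get=$1 file=${2:-/proc/meminfo} var val _
        while IFS=': ' read -r var val _; do
            # stop at the first line whose key matches the requested field
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done < "$file"
        return 1  # field not present in this file
    }

For the snapshot shown above, get_field HugePages_Rsvd prints 0, matching the value the harness stores next.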
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@100 -- # resv=0
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024
00:03:26.328 nr_hugepages=1024
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0
00:03:26.328 resv_hugepages=0
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0
00:03:26.328 surplus_hugepages=0
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0
00:03:26.328 anon_hugepages=0
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv ))
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages ))
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]]
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@25 -- # [[ -n '' ]]
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.328 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.329 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170309104 kB' 'MemAvailable: 173543560 kB' 'Buffers: 3896 kB' 'Cached: 14736464 kB' 'SwapCached: 0 kB' 'Active: 11614940 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196984 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572216 kB' 'Mapped: 212856 kB' 'Shmem: 10628092 kB' 'KReclaimable: 532808 kB' 'Slab: 1186056 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653248 kB' 'KernelStack: 20592 kB' 'PageTables: 9352 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12744224 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317192 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB'
[trace: setup/common.sh@31-32 walks the snapshot again, skipping every field that does not match HugePages_Total]
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]]
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 1024
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv ))
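With surp=0 and resv=0 established and HugePages_Total read back as 1024, the check at setup/hugepages.sh@110, (( 1024 == nr_hugepages + surp + resv )), passes: every page the test requested is present and none of it is surplus or reserved. The same bookkeeping can be re-derived straight from /proc/meminfo; the snippet below only mirrors that check as a sketch, with the requested count of 1024 hard-coded to match this run rather than read from the harness:

    read -r total rsvd surp < <(awk '/^HugePages_Total:/ {t = $2}
                                     /^HugePages_Rsvd:/  {r = $2}
                                     /^HugePages_Surp:/  {s = $2}
                                     END {print t, r, s}' /proc/meminfo)
    requested=1024  # what this run configured; stand-in for the script's nr_hugepages
    if (( total == requested + surp + rsvd )); then
        echo "hugepage accounting consistent: $total == $requested + $surp + $rsvd"
    else
        echo "hugepage accounting mismatch: HugePages_Total=$total"
    fi

The harness then moves on to per-node accounting with get_nodes below.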
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@112 -- # get_nodes
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@27 -- # local node
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9])
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 ))
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}"
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv ))
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=0
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem
00:03:26.330 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]]
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }")
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': '
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _
00:03:26.331 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92965420 kB' 'MemUsed: 4650208 kB' 'SwapCached: 0 kB' 'Active: 1997764 kB' 'Inactive: 216924 kB' 'Active(anon): 1835940 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2055960 kB' 'Mapped: 122844 kB' 'AnonPages: 161880 kB' 'Shmem: 1677212 kB' 'KernelStack: 11560 kB' 'PageTables: 4012 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 655432 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 308424 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0'
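get_nodes above finds two NUMA nodes under /sys/devices/system/node/ and records an expectation of 512 pages per node (nodes_sys[0]=512, nodes_sys[1]=512, no_nodes=2), then get_meminfo is re-entered with node=0, which switches mem_f to /sys/devices/system/node/node0/meminfo, where every line carries a 'Node 0 ' prefix that the @29 expansion strips. The node0 snapshot just printed indeed shows HugePages_Total: 512 and HugePages_Free: 512. A small sketch of the same per-node readout, using the standard sysfs paths (the loop itself is illustrative, not the SPDK helper):

    for node_dir in /sys/devices/system/node/node[0-9]*; do
        node=${node_dir##*node}
        # per-node meminfo lines look like: "Node 0 HugePages_Total:   512"
        total=$(awk '$3 == "HugePages_Total:" {print $4}' "$node_dir/meminfo")
        free=$(awk '$3 == "HugePages_Free:" {print $4}' "$node_dir/meminfo")
        echo "node$node: HugePages_Total=$total HugePages_Free=$free"
    done

The scan that follows walks that node0 snapshot the same way as before, this time looking for HugePages_Surp.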
[trace: setup/common.sh@31-32 walks the node0 snapshot field by field (MemTotal through Unaccepted in this excerpt), skipping every entry that does not match HugePages_Surp]
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@18 -- # local node=1 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@19 -- # local var val 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77345016 kB' 'MemUsed: 16420492 kB' 'SwapCached: 0 kB' 'Active: 9617256 kB' 'Inactive: 3477388 kB' 'Active(anon): 9361124 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12684444 kB' 'Mapped: 90012 kB' 'AnonPages: 410320 kB' 'Shmem: 8950924 kB' 
'KernelStack: 9032 kB' 'PageTables: 5340 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185800 kB' 'Slab: 530624 kB' 'SReclaimable: 185800 kB' 'SUnreclaim: 344824 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.332 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.333 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.334 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # continue 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # echo 0 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/common.sh@33 -- # return 0 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:26.334 node0=512 expecting 512 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:26.334 node1=512 expecting 512 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:26.334 00:03:26.334 real 0m2.342s 00:03:26.334 user 0m0.809s 00:03:26.334 sys 0m1.360s 00:03:26.334 11:49:13 setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:26.334 11:49:13 
setup.sh.hugepages.per_node_1G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:26.334 ************************************ 00:03:26.334 END TEST per_node_1G_alloc 00:03:26.334 ************************************ 00:03:26.334 11:49:13 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:26.334 11:49:13 setup.sh.hugepages -- setup/hugepages.sh@212 -- # run_test even_2G_alloc even_2G_alloc 00:03:26.334 11:49:13 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:26.334 11:49:13 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:26.334 11:49:13 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:26.594 ************************************ 00:03:26.594 START TEST even_2G_alloc 00:03:26.594 ************************************ 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1123 -- # even_2G_alloc 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@152 -- # get_test_nr_hugepages 2097152 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 512 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # NRHUGE=1024 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # HUGE_EVEN_ALLOC=yes 00:03:26.594 11:49:13 
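[editor's note] The trace above shows the even_2G_alloc test being configured: get_test_nr_hugepages is called with 2097152 kB (2 GiB), which at the default 2048 kB hugepage size works out to nr_hugepages=1024, split evenly across the two NUMA nodes as nodes_test[0]=512 and nodes_test[1]=512, before NRHUGE=1024 and HUGE_EVEN_ALLOC=yes hand control back to scripts/setup.sh. The snippet below is a minimal, illustrative sketch of how such an even split can be requested through the kernel's standard per-node sysfs knobs; it is not the actual scripts/setup.sh implementation, and the variable names are assumptions.

#!/usr/bin/env bash
# Sketch only: evenly spread 2 MiB hugepages across all NUMA nodes,
# mirroring the NRHUGE=1024 / HUGE_EVEN_ALLOC=yes setup seen in the trace.
NRHUGE=${NRHUGE:-1024}
nodes=(/sys/devices/system/node/node[0-9]*)
per_node=$(( NRHUGE / ${#nodes[@]} ))    # 512 per node when there are 2 nodes
for n in "${nodes[@]}"; do
    echo "$per_node" | sudo tee \
        "$n/hugepages/hugepages-2048kB/nr_hugepages" > /dev/null
done
# Show what each node actually got (the kernel may allocate fewer if memory is fragmented).
grep -H . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/nr_hugepages

With that split in place, the "node0=512 expecting 512" / "node1=512 expecting 512" checks printed at the end of the previous test are simply comparing these per-node counters against the requested values.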
setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@153 -- # setup output 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:26.594 11:49:13 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:29.139 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:29.139 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:29.139 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@154 -- # verify_nr_hugepages 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@89 -- # local node 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.139 11:49:16 
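[editor's note] The long runs of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]" / "continue" above and below are xtrace from setup/common.sh's get_meminfo helper walking every field of /proc/meminfo (or the per-node /sys/devices/system/node/nodeN/meminfo) until it reaches the requested key, then echoing its value, e.g. the "echo 0" for HugePages_Surp. The function below is a condensed reconstruction of that pattern as it appears in the trace; the real helper may differ in details, and anything beyond what the trace shows is an assumption.

#!/usr/bin/env bash
# Condensed reconstruction (not verbatim setup/common.sh) of the meminfo lookup
# pattern visible in the xtrace output.
shopt -s extglob
get_meminfo() {
    local get=$1 node=${2:-} var val _ line
    local mem_f=/proc/meminfo mem
    # Prefer the per-node view when a node index is given and the file exists.
    [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
        mem_f=/sys/devices/system/node/node$node/meminfo
    mapfile -t mem < "$mem_f"
    mem=("${mem[@]#Node +([0-9]) }")         # drop the leading "Node N " prefix
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"
        [[ $var == "$get" ]] && { echo "$val"; return 0; }
    done
    return 1
}
get_meminfo HugePages_Surp 1    # prints 0 on this system, per the trace above

Every non-matching field produces one "continue" line in the xtrace, which is why each single lookup expands into dozens of log lines here.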
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170339380 kB' 'MemAvailable: 173573836 kB' 'Buffers: 3896 kB' 'Cached: 14736580 kB' 'SwapCached: 0 kB' 'Active: 11610196 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192240 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567216 kB' 'Mapped: 211788 kB' 'Shmem: 10628208 kB' 'KReclaimable: 532808 kB' 'Slab: 1186312 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653504 kB' 'KernelStack: 20448 kB' 'PageTables: 8720 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12721220 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317144 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.139 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc 
-- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.140 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170335760 kB' 'MemAvailable: 173570216 kB' 'Buffers: 3896 kB' 'Cached: 14736584 kB' 'SwapCached: 0 kB' 'Active: 11614060 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196104 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571128 kB' 'Mapped: 212056 kB' 'Shmem: 10628212 kB' 'KReclaimable: 532808 kB' 'Slab: 1186408 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653600 kB' 'KernelStack: 20496 kB' 'PageTables: 8868 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12725064 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.141 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ 
Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.142 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local 
get=HugePages_Rsvd 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170331728 kB' 'MemAvailable: 173566184 kB' 'Buffers: 3896 kB' 'Cached: 14736600 kB' 'SwapCached: 0 kB' 'Active: 11615624 kB' 'Inactive: 3694312 kB' 'Active(anon): 11197668 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 572708 kB' 'Mapped: 212576 kB' 'Shmem: 10628228 kB' 'KReclaimable: 532808 kB' 'Slab: 1186408 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653600 kB' 'KernelStack: 20480 kB' 'PageTables: 8840 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12726416 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317100 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 
11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.143 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.144 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:29.145 nr_hugepages=1024 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:29.145 resv_hugepages=0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:29.145 surplus_hugepages=0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:29.145 anon_hugepages=0 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node= 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.145 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170332932 kB' 'MemAvailable: 173567388 kB' 'Buffers: 3896 kB' 'Cached: 14736624 kB' 'SwapCached: 0 kB' 'Active: 11610252 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192296 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567272 kB' 'Mapped: 212136 kB' 'Shmem: 10628252 kB' 'KReclaimable: 532808 kB' 'Slab: 1186408 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 653600 kB' 'KernelStack: 20512 kB' 'PageTables: 8936 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12720320 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317096 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 
00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.145 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.146 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
[[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 1024 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@27 -- # local node 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=0 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92969576 kB' 'MemUsed: 4646052 kB' 'SwapCached: 0 kB' 'Active: 1996340 kB' 'Inactive: 216924 kB' 'Active(anon): 1834516 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2056032 kB' 'Mapped: 122500 kB' 'AnonPages: 160348 kB' 'Shmem: 1677284 kB' 'KernelStack: 11560 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 655376 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 308368 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
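The repeated IFS=': ' / read -r var val _ / continue entries in this stretch are setup/common.sh's get_meminfo walking node0's meminfo looking for a single key (here HugePages_Surp). A minimal sketch of that pattern, reconstructed from the trace rather than copied from the actual setup/common.sh source (the name get_meminfo_sketch and the extglob toggle are assumptions), looks like this:

    # Minimal sketch (reconstructed from the trace, not the real helper):
    # look up one key in a node's meminfo and print its value.
    get_meminfo_sketch() {
        local get=$1 node=$2 var val _
        local mem_f=/proc/meminfo
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local -a mem
        mapfile -t mem < "$mem_f"
        shopt -s extglob                  # assumed enabled, for +([0-9]) below
        mem=("${mem[@]#Node +([0-9]) }")  # per-node lines start with "Node <n> "
        local line
        for line in "${mem[@]}"; do
            IFS=': ' read -r var val _ <<<"$line"
            [[ $var == "$get" ]] && { echo "$val"; return 0; }
        done
        return 1
    }
    # get_meminfo_sketch HugePages_Surp 0   # prints 0, as in the trace

Each non-matching key produces one [[ ... ]] / continue pair in the xtrace, which is why the scan above spans so many lines before the final echo 0 / return 0.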
00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.147 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # 
continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 
setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@18 -- # local node=1 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@19 -- # local var val 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77362600 kB' 'MemUsed: 16402908 kB' 'SwapCached: 0 kB' 'Active: 9614080 kB' 'Inactive: 3477388 kB' 'Active(anon): 9357948 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12684508 kB' 'Mapped: 89288 kB' 'AnonPages: 407064 kB' 'Shmem: 8950988 kB' 'KernelStack: 8952 kB' 'PageTables: 4924 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185800 kB' 'Slab: 531032 kB' 'SReclaimable: 185800 kB' 'SUnreclaim: 345232 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.148 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 
11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 
11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.149 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- 
setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # continue 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # echo 0 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/common.sh@33 -- # return 0 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:29.150 node0=512 expecting 512 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@128 -- # echo 'node1=512 expecting 512' 00:03:29.150 node1=512 expecting 512 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- setup/hugepages.sh@130 -- # [[ 512 == \5\1\2 ]] 00:03:29.150 00:03:29.150 real 0m2.810s 00:03:29.150 user 0m1.154s 00:03:29.150 sys 0m1.726s 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:29.150 11:49:16 setup.sh.hugepages.even_2G_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:29.150 ************************************ 00:03:29.150 END TEST even_2G_alloc 00:03:29.150 
************************************ 00:03:29.410 11:49:16 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:29.410 11:49:16 setup.sh.hugepages -- setup/hugepages.sh@213 -- # run_test odd_alloc odd_alloc 00:03:29.410 11:49:16 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:29.410 11:49:16 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:29.410 11:49:16 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:29.410 ************************************ 00:03:29.410 START TEST odd_alloc 00:03:29.410 ************************************ 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1123 -- # odd_alloc 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@159 -- # get_test_nr_hugepages 2098176 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@49 -- # local size=2098176 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1025 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1025 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=512 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 513 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=513 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:29.410 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGEMEM=2049 00:03:29.411 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # HUGE_EVEN_ALLOC=yes 00:03:29.411 11:49:16 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@160 -- # setup output 00:03:29.411 11:49:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:29.411 11:49:16 setup.sh.hugepages.odd_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 
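The odd_alloc prologue just traced requests 2098176 kB (HUGEMEM=2049) of 2048 kB hugepages, which the script turns into nr_hugepages=1025, and get_test_nr_hugepages_per_node spreads those pages over the two NUMA nodes as 513 and 512 (the trace assigns 512 to the last node, then 513 to node 0). A rough sketch of that split, with a hypothetical helper name rather than the real setup/hugepages.sh function, under the assumption that any leftover page goes to the lowest-numbered node:

    # Rough sketch of the per-node split implied by the odd_alloc prologue.
    split_hugepages_sketch() {
        local total=$1 nodes=$2 i base rem
        base=$(( total / nodes ))   # 1025 / 2 = 512
        rem=$(( total % nodes ))    # 1025 % 2 = 1 leftover page
        for (( i = 0; i < nodes; i++ )); do
            # hand the leftover pages to the lowest-numbered nodes first
            echo "node$i=$(( base + (i < rem ? 1 : 0) ))"
        done
    }
    # 2098176 kB of 2048 kB pages -> nr_hugepages=1025 in the trace above
    # split_hugepages_sketch 1025 2   # -> node0=513, node1=512

That 513/512 distribution matches the two nodes_test assignments in the trace above and is what the rest of the odd_alloc verification walks the per-node meminfo files to confirm.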
00:03:31.956 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:31.956 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:31.956 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@161 -- # verify_nr_hugepages 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@89 -- # local node 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170363896 kB' 'MemAvailable: 173598352 kB' 'Buffers: 3896 kB' 'Cached: 14736728 kB' 'SwapCached: 0 kB' 'Active: 11611452 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193496 kB' 'Inactive(anon): 0 kB' 
'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567884 kB' 'Mapped: 211928 kB' 'Shmem: 10628356 kB' 'KReclaimable: 532808 kB' 'Slab: 1185112 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652304 kB' 'KernelStack: 20528 kB' 'PageTables: 8984 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12720796 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317112 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.956 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 
11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.957 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 
11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.958 11:49:19 
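[annotation] The xtrace runs above and below are all produced by the same meminfo lookup pattern in setup/common.sh: the script reads /proc/meminfo field by field (IFS=': '; read -r var val _), skips every key that does not match the requested one ("continue"), and echoes the matching value (here AnonHugePages -> 0, then HugePages_Surp). The following is a minimal illustrative sketch of that loop, reconstructed only from the traced commands; it is not the SPDK source, and it omits the per-NUMA-node path and the mapfile/"Node N" prefix stripping that the real get_meminfo also performs.

#!/usr/bin/env bash
# Sketch of the lookup pattern visible in the trace (assumed, not copied from SPDK).
get_meminfo_sketch() {
    local get=$1            # key to look up, e.g. AnonHugePages, HugePages_Surp
    local mem_f=/proc/meminfo
    local var val _
    while IFS=': ' read -r var val _; do
        # non-matching keys are skipped -- this is what produces the long
        # "[[ <key> == ... ]] / continue" runs in the console output
        [[ $var == "$get" ]] || continue
        echo "$val"
        return 0
    done < "$mem_f"
    return 1
}

# Example calls mirroring the trace (values depend on the machine):
get_meminfo_sketch AnonHugePages    # resolved to 0 kB in the run above
get_meminfo_sketch HugePages_Surp   # resolved to 0 in the run above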
setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170363644 kB' 'MemAvailable: 173598100 kB' 'Buffers: 3896 kB' 'Cached: 14736732 kB' 'SwapCached: 0 kB' 'Active: 11611132 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193176 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567600 kB' 'Mapped: 211876 kB' 'Shmem: 10628360 kB' 'KReclaimable: 532808 kB' 'Slab: 1185096 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652288 kB' 'KernelStack: 20512 kB' 'PageTables: 8920 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12720812 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317080 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 
11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.958 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.959 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 
-- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170364068 kB' 'MemAvailable: 173598524 kB' 'Buffers: 3896 kB' 'Cached: 14736748 kB' 'SwapCached: 0 kB' 'Active: 11610480 kB' 'Inactive: 3694312 kB' 'Active(anon): 11192524 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 567380 kB' 'Mapped: 211760 kB' 'Shmem: 10628376 kB' 'KReclaimable: 532808 kB' 'Slab: 1185088 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652280 kB' 'KernelStack: 20496 kB' 'PageTables: 8856 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12720832 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317064 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 
'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.960 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.961 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1025 00:03:31.962 nr_hugepages=1025 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:31.962 resv_hugepages=0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:31.962 surplus_hugepages=0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:31.962 anon_hugepages=0 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@107 -- # (( 1025 == 
nr_hugepages + surp + resv )) 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@109 -- # (( 1025 == nr_hugepages )) 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:31.962 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node= 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170363312 kB' 'MemAvailable: 173597768 kB' 'Buffers: 3896 kB' 'Cached: 14736768 kB' 'SwapCached: 0 kB' 'Active: 11611620 kB' 'Inactive: 3694312 kB' 'Active(anon): 11193664 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 568556 kB' 'Mapped: 211764 kB' 'Shmem: 10628396 kB' 'KReclaimable: 532808 kB' 'Slab: 1185088 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652280 kB' 'KernelStack: 20496 kB' 'PageTables: 8816 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103029572 kB' 'Committed_AS: 12732448 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317048 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1025' 'HugePages_Free: 1025' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2099200 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:31.963 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 
11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.226 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
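The long run of IFS=': '/read/continue entries above is the generic field lookup that setup/common.sh performs over /proc/meminfo (or a per-node meminfo when a node is given). A minimal bash sketch of that pattern follows; the function name and locals are illustrative assumptions, not the literal setup/common.sh source.

# Sketch of the meminfo lookup pattern being traced above (assumed names).
get_meminfo_sketch() {
    local get=$1 node=$2 mem_f=/proc/meminfo line var val
    # When a node is given, prefer the per-node meminfo exposed by sysfs.
    if [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]]; then
        mem_f=/sys/devices/system/node/node$node/meminfo
    fi
    while IFS= read -r line; do
        # Per-node files prefix every row with "Node <n> "; drop that prefix.
        [[ -n $node ]] && line=${line#"Node $node "}
        IFS=': ' read -r var val _ <<< "$line"
        if [[ $var == "$get" ]]; then
            echo "$val"
            return 0
        fi
    done < "$mem_f"
    return 1
}

On this host, "get_meminfo_sketch HugePages_Total" would print 1025 and "get_meminfo_sketch HugePages_Free 0" would print the node-0 value (512 in this run).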
00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.227 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 1025 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@110 -- # (( 1025 == nr_hugepages + surp + resv )) 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@27 -- # local node 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=513 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=0 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92984292 kB' 'MemUsed: 4631336 kB' 'SwapCached: 0 kB' 'Active: 1996808 kB' 'Inactive: 216924 kB' 'Active(anon): 1834984 kB' 
'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2056160 kB' 'Mapped: 122516 kB' 'AnonPages: 160720 kB' 'Shmem: 1677412 kB' 'KernelStack: 11592 kB' 'PageTables: 4216 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 654464 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 307456 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 
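The two assertions at hugepages.sh@109 and @110 seen just above amount to checking the odd request against the kernel's hugepage accounting. A simplified restatement under this run's values; the awk lookups stand in for the traced read loop and are not the script's own code.

# Simplified form of the checks at hugepages.sh@109-110 (awk is a stand-in).
nr_hugepages=1025                                  # the odd count requested by this test
total=$(awk '$1=="HugePages_Total:" {print $2}' /proc/meminfo)
surp=$(awk '$1=="HugePages_Surp:"  {print $2}' /proc/meminfo)
resv=$(awk '$1=="HugePages_Rsvd:"  {print $2}' /proc/meminfo)
(( total == nr_hugepages ))               || echo "total != requested"
(( total == nr_hugepages + surp + resv )) || echo "total != requested + surplus + reserved"

In this run both comparisons hold: 1025 total, 0 surplus, 0 reserved.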
00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.228 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- 
# [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 
setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@18 -- # local node=1 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@19 -- # local var val 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.229 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 77381076 kB' 'MemUsed: 16384432 kB' 'SwapCached: 0 kB' 'Active: 9613800 kB' 'Inactive: 3477388 kB' 'Active(anon): 9357668 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12684528 kB' 'Mapped: 89248 kB' 'AnonPages: 406728 kB' 'Shmem: 8951008 kB' 'KernelStack: 8904 kB' 'PageTables: 4776 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185800 kB' 'Slab: 530616 kB' 'SReclaimable: 185800 kB' 'SUnreclaim: 344816 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 513' 'HugePages_Free: 513' 'HugePages_Surp: 0' 
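The node-1 readout above completes the picture: the odd total of 1025 pages is split 512/513 across the two sockets. The same numbers are visible directly in sysfs; this loop is only an illustrative cross-check, not part of the traced scripts.

# Illustrative cross-check of the per-node split (not part of setup/common.sh).
for node in /sys/devices/system/node/node[0-9]*; do
    count=$(cat "$node/hugepages/hugepages-2048kB/nr_hugepages")
    echo "${node##*/}: $count x 2048kB hugepages"
done
# Expected on this host: node0: 512, node1: 513 (1025 in total)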
00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.230 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # continue 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # IFS=': ' 
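In outline, the loop at hugepages.sh@115-117 bumps each node's expected count by the reserved pages and then by that node's own surplus, read from its per-node meminfo. A sketch under the values observed in this run; the array contents and resv are assumptions matching the log, not the script's real state.

# Sketch of the per-node adjustment loop at hugepages.sh@115-117 (assumed values).
nodes_test=([0]=512 [1]=513)   # starting per-node counts seen in this run
resv=0                         # HugePages_Rsvd observed above
for node in "${!nodes_test[@]}"; do
    (( nodes_test[node] += resv ))
    # Per-node meminfo rows look like "Node 1 HugePages_Surp: 0".
    surp=$(awk -v n="$node" '$1=="Node" && $2==n && $3=="HugePages_Surp:" {print $4}' \
           "/sys/devices/system/node/node$node/meminfo")
    (( nodes_test[node] += surp ))
done
echo "node0=${nodes_test[0]} node1=${nodes_test[1]}"   # stays 512/513 when surplus is 0

Since surplus and reserved are both 0 here, the expectations stay at 512 and 513, matching the node0=/node1= values echoed below.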
00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # echo 0 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/common.sh@33 -- # return 0 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 513' 00:03:32.231 node0=512 expecting 513 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@128 -- # echo 'node1=513 expecting 512' 00:03:32.231 node1=513 expecting 512 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- setup/hugepages.sh@130 -- # [[ 512 513 == \5\1\2\ \5\1\3 ]] 00:03:32.231 00:03:32.231 real 0m2.840s 00:03:32.231 user 0m1.174s 00:03:32.231 sys 0m1.736s 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:32.231 11:49:19 setup.sh.hugepages.odd_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:32.231 ************************************ 00:03:32.231 END TEST odd_alloc 00:03:32.231 ************************************ 00:03:32.231 11:49:19 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:32.231 11:49:19 setup.sh.hugepages -- setup/hugepages.sh@214 -- # run_test custom_alloc custom_alloc 00:03:32.231 11:49:19 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:32.231 11:49:19 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:32.231 11:49:19 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:32.231 ************************************ 00:03:32.231 START TEST custom_alloc 00:03:32.231 ************************************ 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1123 -- # custom_alloc 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@167 -- # local IFS=, 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@169 -- # local node 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # nodes_hp=() 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@170 -- # local nodes_hp 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@172 -- # local nr_hugepages=0 _nr_hugepages=0 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@174 -- # get_test_nr_hugepages 1048576 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=1048576 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 
1 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=512 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=512 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 0 > 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 256 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 1 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@82 -- # nodes_test[_no_nodes - 1]=256 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@83 -- # : 0 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@84 -- # : 0 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@81 -- # (( _no_nodes > 0 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@175 -- # nodes_hp[0]=512 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@176 -- # (( 2 > 1 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@177 -- # get_test_nr_hugepages 2097152 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@50 -- # (( 1 > 1 )) 00:03:32.231 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@58 -- # get_test_nr_hugepages_per_node 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 1 > 0 )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@178 -- # nodes_hp[1]=1024 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@181 -- # for node in "${!nodes_hp[@]}" 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@182 -- # HUGENODE+=("nodes_hp[$node]=${nodes_hp[node]}") 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@183 -- # (( _nr_hugepages += nodes_hp[node] )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@186 -- # get_test_nr_hugepages_per_node 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # user_nodes=() 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@69 -- # (( 0 > 0 )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@74 -- # (( 2 > 0 )) 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=512 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@75 -- # for _no_nodes in "${!nodes_hp[@]}" 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@76 -- # nodes_test[_no_nodes]=1024 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@78 -- # return 0 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # HUGENODE='nodes_hp[0]=512,nodes_hp[1]=1024' 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@187 -- # setup output 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:32.232 11:49:19 setup.sh.hugepages.custom_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:34.774 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:34.774 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 
00:03:34.774 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:34.774 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # nr_hugepages=1536 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@188 -- # verify_nr_hugepages 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@89 -- # local node 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.774 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169376256 kB' 'MemAvailable: 172610712 kB' 'Buffers: 3896 kB' 'Cached: 14736884 kB' 'SwapCached: 0 kB' 'Active: 11613972 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196016 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 
kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570272 kB' 'Mapped: 211876 kB' 'Shmem: 10628512 kB' 'KReclaimable: 532808 kB' 'Slab: 1184744 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 651936 kB' 'KernelStack: 20656 kB' 'PageTables: 9536 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12722592 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
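The long run of [[ ... == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] / continue entries around here is the xtrace of get_meminfo scanning the meminfo dump key by key until it reaches AnonHugePages. A standalone sketch of that pattern, covering only the system-wide /proc/meminfo case (the real helper in setup/common.sh also takes a node argument and strips the "Node N " prefix from the per-node /sys meminfo files; this is not that source):

#!/usr/bin/env bash
get_meminfo() {
    local get=$1 var val _ line
    local -a mem
    mapfile -t mem < /proc/meminfo                 # one "Key: value [kB]" per entry
    for line in "${mem[@]}"; do
        IFS=': ' read -r var val _ <<< "$line"     # split key from value
        if [[ $var == "$get" ]]; then              # quoted RHS => literal match
            echo "$val"
            return 0
        fi
    done
    return 1
}

get_meminfo AnonHugePages     # prints 0 on this box, hence the anon=0 result below
get_meminfo HugePages_Total   # prints 1536 on a box configured like this run

The scan for AnonHugePages ends with echo 0 and return 0 a little further down, which is where the anon=0 assignment comes from.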
00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:34.775 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.042 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 
-- # local get=HugePages_Surp 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169378276 kB' 'MemAvailable: 172612732 kB' 'Buffers: 3896 kB' 'Cached: 14736884 kB' 'SwapCached: 0 kB' 'Active: 11613348 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195392 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570096 kB' 'Mapped: 211796 kB' 'Shmem: 10628512 kB' 'KReclaimable: 532808 kB' 'Slab: 1184712 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 651904 kB' 'KernelStack: 20800 kB' 'PageTables: 9668 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12722612 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.043 
11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.043 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- 
# [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.044 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
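The backslashes in patterns such as \H\u\g\e\P\a\g\e\s\_\S\u\r\p are not log corruption; they are how bash -x renders a quoted right-hand side of == inside [[ ]], i.e. a literal string comparison rather than a glob. A one-liner that should reproduce the same rendering:

bash -xc 'get=HugePages_Surp; var=MemTotal; [[ $var == "$get" ]]'
# expected xtrace, matching the entries in this scan:
# + get=HugePages_Surp
# + var=MemTotal
# + [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]]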
00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 
-- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169376328 kB' 'MemAvailable: 172610784 kB' 'Buffers: 3896 kB' 'Cached: 14736904 kB' 'SwapCached: 0 kB' 'Active: 11613036 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195080 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569780 kB' 'Mapped: 211788 kB' 'Shmem: 10628532 kB' 'KReclaimable: 532808 kB' 'Slab: 1184712 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 651904 kB' 'KernelStack: 20848 kB' 'PageTables: 9648 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12724124 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317224 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.045 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
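Once the three get_meminfo lookups complete (anon and surp above, the HugePages_Rsvd scan in progress here), verify_nr_hugepages compares the counters from these dumps against the 1536 pages requested. A standalone sketch of that cross-check, reading /proc/meminfo directly; the exact assertions live in hugepages.sh, and the hp helper below is purely illustrative:

#!/usr/bin/env bash
expected=1536                                       # nodes_hp[0] + nodes_hp[1]

hp() { awk -v k="$1:" '$1 == k {print $2}' /proc/meminfo; }

total=$(hp HugePages_Total)     # 1536 in the dumps above
free=$(hp HugePages_Free)       # 1536: nothing has mapped the pool yet
surp=$(hp HugePages_Surp)       # 0
size_kb=$(hp Hugepagesize)      # 2048

(( total - surp == expected )) && echo "persistent pool matches: $total pages"
echo "Hugetlb accounted: $(( total * size_kb )) kB"

With the values in the dumps above this gives 1536 - 0 == 1536 and 1536 * 2048 kB = 3145728 kB, matching the Hugetlb line in each snapshot.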
00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.046 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1536 00:03:35.047 nr_hugepages=1536 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:35.047 resv_hugepages=0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:35.047 surplus_hugepages=0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:35.047 anon_hugepages=0 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@107 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@109 -- # (( 1536 == nr_hugepages )) 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node= 00:03:35.047 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 169376272 kB' 'MemAvailable: 172610728 kB' 'Buffers: 3896 kB' 'Cached: 14736928 kB' 'SwapCached: 0 kB' 'Active: 11613320 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195364 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 
0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569960 kB' 'Mapped: 211772 kB' 'Shmem: 10628556 kB' 'KReclaimable: 532808 kB' 'Slab: 1184712 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 651904 kB' 'KernelStack: 20992 kB' 'PageTables: 10432 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 102506308 kB' 'Committed_AS: 12724148 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317272 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1536' 'HugePages_Free: 1536' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 3145728 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 
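At this point hugepages.sh has nr_hugepages=1536 with zero reserved, surplus, and anonymous hugepages, and the /proc/meminfo dump just printed confirms HugePages_Total: 1536. The remainder of the trace repeats the same scan against the per-node meminfo files (node0 and node1) to check how the custom allocation was split; this run reports 512 pages on node0 and 1024 on node1 further down. A small sketch of that cross-check, reusing the hypothetical get_meminfo_sketch helper above and not the test's own code, might look like:

    # Sketch: confirm the global hugepage count and the sum of the per-node
    # counts both match what was requested.
    verify_hugepage_split() {
        local expected=$1 total node_sum=0 n
        total=$(get_meminfo_sketch HugePages_Total)
        for n in /sys/devices/system/node/node[0-9]*; do
            (( node_sum += $(get_meminfo_sketch HugePages_Total "${n##*node}") ))
        done
        (( total == expected && node_sum == expected ))
    }
    # verify_hugepage_split 1536 && echo "custom_alloc split (512 + 1024) adds up"
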
00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.048 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 1536 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@110 -- # (( 1536 == nr_hugepages + surp + resv )) 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@27 -- # local node 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=512 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.049 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=0 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 92984572 kB' 'MemUsed: 4631056 kB' 'SwapCached: 0 kB' 'Active: 1997444 kB' 'Inactive: 216924 kB' 'Active(anon): 1835620 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2056264 kB' 'Mapped: 122524 kB' 'AnonPages: 160676 kB' 'Shmem: 1677516 kB' 'KernelStack: 11576 kB' 'PageTables: 4080 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 654364 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 307356 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 512' 'HugePages_Free: 512' 'HugePages_Surp: 0' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.050 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 1 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@18 -- # local node=1 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@19 -- # local var val 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node1/meminfo ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node1/meminfo 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 93765508 kB' 'MemFree: 76389232 kB' 'MemUsed: 17376276 kB' 'SwapCached: 0 kB' 'Active: 9616240 kB' 'Inactive: 3477388 kB' 'Active(anon): 9360108 kB' 'Inactive(anon): 0 kB' 'Active(file): 256132 kB' 'Inactive(file): 3477388 kB' 'Unevictable: 0 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 12684560 kB' 'Mapped: 89248 kB' 'AnonPages: 409144 kB' 'Shmem: 8951040 kB' 'KernelStack: 9304 kB' 'PageTables: 5992 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 185800 kB' 'Slab: 530348 kB' 'SReclaimable: 185800 kB' 'SUnreclaim: 344548 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.051 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unevictable == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 
setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.052 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # continue 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@32 -- # [[ 
HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # echo 0 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/common.sh@33 -- # return 0 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node0=512 expecting 512' 00:03:35.053 node0=512 expecting 512 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@128 -- # echo 'node1=1024 expecting 1024' 00:03:35.053 node1=1024 expecting 1024 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- setup/hugepages.sh@130 -- # [[ 512,1024 == \5\1\2\,\1\0\2\4 ]] 00:03:35.053 00:03:35.053 real 0m2.845s 00:03:35.053 user 0m1.170s 00:03:35.053 sys 0m1.745s 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:35.053 11:49:22 setup.sh.hugepages.custom_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:35.053 ************************************ 00:03:35.053 END TEST custom_alloc 00:03:35.053 ************************************ 00:03:35.053 11:49:22 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:35.053 11:49:22 setup.sh.hugepages -- setup/hugepages.sh@215 -- # run_test no_shrink_alloc no_shrink_alloc 00:03:35.053 11:49:22 setup.sh.hugepages -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:35.053 11:49:22 setup.sh.hugepages -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:35.053 11:49:22 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:35.053 ************************************ 00:03:35.053 START TEST no_shrink_alloc 00:03:35.053 ************************************ 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1123 -- # no_shrink_alloc 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@195 -- # get_test_nr_hugepages 2097152 0 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@49 -- # local size=2097152 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@50 -- # (( 2 > 1 )) 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@51 -- # shift 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # node_ids=('0') 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@52 -- # local node_ids 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@55 -- # (( size >= default_hugepages )) 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@57 -- # nr_hugepages=1024 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@58 -- 
# get_test_nr_hugepages_per_node 0 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # user_nodes=('0') 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@62 -- # local user_nodes 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@64 -- # local _nr_hugepages=1024 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@65 -- # local _no_nodes=2 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # nodes_test=() 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@67 -- # local -g nodes_test 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@69 -- # (( 1 > 0 )) 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@70 -- # for _no_nodes in "${user_nodes[@]}" 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@71 -- # nodes_test[_no_nodes]=1024 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@73 -- # return 0 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@198 -- # setup output 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:35.053 11:49:22 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:37.596 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:37.596 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:37.596 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@199 -- # verify_nr_hugepages 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:37.861 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:37.862 
11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170402248 kB' 'MemAvailable: 173636704 kB' 'Buffers: 3896 kB' 'Cached: 14737024 kB' 'SwapCached: 0 kB' 'Active: 11614232 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196276 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570900 kB' 'Mapped: 212348 kB' 'Shmem: 10628652 kB' 'KReclaimable: 532808 kB' 'Slab: 1184932 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652124 kB' 'KernelStack: 20880 kB' 'PageTables: 10132 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12725548 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317240 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable 
== \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == 
\A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.862 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': 
' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ 
VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170423016 kB' 'MemAvailable: 173657472 kB' 'Buffers: 3896 kB' 'Cached: 14737024 kB' 'SwapCached: 0 kB' 'Active: 11616052 kB' 'Inactive: 3694312 kB' 
'Active(anon): 11198096 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 571656 kB' 'Mapped: 212436 kB' 'Shmem: 10628652 kB' 'KReclaimable: 532808 kB' 'Slab: 1184852 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652044 kB' 'KernelStack: 20624 kB' 'PageTables: 9480 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12727420 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:37.863 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == 
\H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.864 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 
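(Editor's note on the trace above: the long run of "[[ <field> == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] ... continue" lines is setup/common.sh's get_meminfo helper walking /proc/meminfo field by field until it hits the requested key — here HugePages_Surp, and just before it AnonHugePages — then echoing the value and returning, which verify_nr_hugepages stores as anon=0, surp=0 and so on. The following is a minimal sketch of that lookup reconstructed from the xtrace lines themselves; the names follow the trace, but it is an illustration under those assumptions, not the actual SPDK setup/common.sh.)

  #!/usr/bin/env bash
  shopt -s extglob   # needed for the "Node +([0-9]) " prefix strip seen at common.sh@29

  # Illustrative sketch of the traced lookup: return one field from meminfo.
  get_meminfo() {
      local get=$1 node=${2:-}            # e.g. get=HugePages_Surp, node optional
      local mem_f=/proc/meminfo line var val _
      # Per-node lookups read the node's own meminfo file when it exists (common.sh@23).
      if [[ -e /sys/devices/system/node/node$node/meminfo ]]; then
          mem_f=/sys/devices/system/node/node$node/meminfo
      fi
      local -a mem
      mapfile -t mem < "$mem_f"
      mem=("${mem[@]#Node +([0-9]) }")    # strip the "Node N " prefix on per-node files
      for line in "${mem[@]}"; do
          IFS=': ' read -r var val _ <<< "$line"
          [[ $var == "$get" ]] || continue   # skip every field except the requested one
          echo "$val"                        # e.g. 0 for HugePages_Surp in the trace above
          return 0
      done
      return 1
  }

  # Example matching the trace: system-wide surplus huge pages are 0.
  # get_meminfo HugePages_Surp

(The real helper iterates with read in a loop over the printf'd array rather than a here-string, but the effect — one value per requested key, per node or system-wide — is the same.)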
00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- 
# read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.865 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170418104 kB' 'MemAvailable: 173652560 kB' 'Buffers: 3896 kB' 'Cached: 14737048 kB' 'SwapCached: 0 kB' 'Active: 11618948 kB' 'Inactive: 3694312 kB' 'Active(anon): 11200992 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575584 kB' 'Mapped: 212712 kB' 
'Shmem: 10628676 kB' 'KReclaimable: 532808 kB' 'Slab: 1184864 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652056 kB' 'KernelStack: 20880 kB' 'PageTables: 9756 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12731120 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317292 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 
11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val 
_ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.866 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.867 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:37.868 nr_hugepages=1024 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:37.868 resv_hugepages=0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:37.868 surplus_hugepages=0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:37.868 anon_hugepages=0 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170418528 kB' 'MemAvailable: 173652984 kB' 'Buffers: 3896 kB' 'Cached: 14737080 kB' 'SwapCached: 0 kB' 'Active: 11618840 kB' 'Inactive: 3694312 kB' 'Active(anon): 11200884 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 
8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 575424 kB' 'Mapped: 212704 kB' 'Shmem: 10628708 kB' 'KReclaimable: 532808 kB' 'Slab: 1184864 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652056 kB' 'KernelStack: 20688 kB' 'PageTables: 9428 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12731144 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317308 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:24 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.868 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc 
-- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.869 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91936716 kB' 'MemUsed: 5678912 kB' 'SwapCached: 0 kB' 'Active: 1998932 kB' 'Inactive: 216924 kB' 'Active(anon): 1837108 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2056432 kB' 'Mapped: 122540 kB' 'AnonPages: 162628 kB' 'Shmem: 1677684 kB' 'KernelStack: 11592 kB' 'PageTables: 4136 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 654304 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 307296 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 
kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.870 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # 
read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.871 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:37.872 node0=1024 expecting 1024 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # CLEAR_HUGE=no 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # NRHUGE=512 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@202 -- # setup output 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@9 -- # [[ output == output ]] 00:03:37.872 11:49:25 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:03:40.454 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:03:40.454 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:03:40.454 
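
The long run of "continue" entries above is setup/common.sh's get_meminfo helper (common.sh@31-33) walking /proc/meminfo one field at a time until it reaches the requested key, here HugePages_Surp, whose value 0 is echoed back and folded into nodes_test before the node0=1024 expecting 1024 check at hugepages.sh@126-130. The test then re-runs "setup output" with CLEAR_HUGE=no and NRHUGE=512, which produces the PCI/vfio-pci listing that follows. A minimal sketch of that parsing loop, simplified from the trace; the per-node sysfs path and the "Node N " prefix strip (common.sh@28-29) are reduced to comments and are assumptions about details not fully visible here:

    #!/usr/bin/env bash
    # Simplified sketch of the get_meminfo loop traced above (setup/common.sh@17-33).
    # Not the literal script: per-node handling and the "Node N " prefix strip
    # (mapfile plus an extglob expansion at common.sh@28-29) are only noted in comments.
    get_meminfo() {
        local get=$1 node=${2:-}
        local mem_f=/proc/meminfo
        # With a node argument the helper reads the per-node file instead
        # (path inferred from the trace, where $node happens to be empty):
        [[ -n $node && -e /sys/devices/system/node/node$node/meminfo ]] &&
            mem_f=/sys/devices/system/node/node$node/meminfo
        local var val _
        while IFS=': ' read -r var val _; do
            [[ $var == "$get" ]] || continue   # produces the "continue" lines above
            echo "$val"                        # e.g. 0 for HugePages_Surp, 1024 for HugePages_Total
            return 0
        done < "$mem_f"
        return 1
    }

    get_meminfo HugePages_Surp    # -> 0 in this run

The echoed 0 is what feeds (( nodes_test[node] += 0 )) at hugepages.sh@117 just before the expected-count comparison.
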
0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:03:40.454 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:03:40.454 INFO: Requested 512 hugepages but 1024 already allocated on node0 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@204 -- # verify_nr_hugepages 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@89 -- # local node 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@90 -- # local sorted_t 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@91 -- # local sorted_s 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@92 -- # local surp 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@93 -- # local resv 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@94 -- # local anon 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@96 -- # [[ always [madvise] never != *\[\n\e\v\e\r\]* ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # get_meminfo AnonHugePages 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=AnonHugePages 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170434020 kB' 'MemAvailable: 173668476 kB' 'Buffers: 3896 kB' 'Cached: 14737160 kB' 'SwapCached: 0 kB' 'Active: 11614052 kB' 'Inactive: 3694312 kB' 'Active(anon): 11196096 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570008 kB' 'Mapped: 211928 kB' 'Shmem: 10628788 kB' 'KReclaimable: 532808 kB' 'Slab: 1185452 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652644 kB' 'KernelStack: 20672 kB' 'PageTables: 9404 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12723856 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317128 kB' 'VmallocChunk: 0 kB' 
'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.454 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # 
continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.455 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r 
var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \A\n\o\n\H\u\g\e\P\a\g\e\s ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@97 -- # anon=0 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # get_meminfo HugePages_Surp 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- 
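
The lookup that finishes above sets anon=0 at hugepages.sh@97 before moving on to HugePages_Surp at @99. In the script, that lookup is gated by the transparent-hugepage check at hugepages.sh@96, where the value read was "always [madvise] never": anonymous huge pages are only queried when THP is not pinned to [never]. A small illustrative sketch of that gate, reusing the get_meminfo sketch above (not the literal hugepages.sh code):

    # Illustrative gate corresponding to hugepages.sh@96-97: only query
    # AnonHugePages when transparent hugepages are not set to [never].
    thp=$(</sys/kernel/mm/transparent_hugepage/enabled)   # "always [madvise] never" in this run
    if [[ $thp != *"[never]"* ]]; then
        anon=$(get_meminfo AnonHugePages)   # 0 here
    else
        anon=0
    fi
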
setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170434544 kB' 'MemAvailable: 173669000 kB' 'Buffers: 3896 kB' 'Cached: 14737164 kB' 'SwapCached: 0 kB' 'Active: 11613896 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195940 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 569884 kB' 'Mapped: 211884 kB' 'Shmem: 10628792 kB' 'KReclaimable: 532808 kB' 'Slab: 1185344 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652536 kB' 'KernelStack: 20592 kB' 'PageTables: 9376 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12725364 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.456 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 
11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 
-- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read 
-r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.457 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@99 -- # surp=0 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # get_meminfo HugePages_Rsvd 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Rsvd 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # 
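
With surp=0 recorded at hugepages.sh@99, the trace continues into get_meminfo HugePages_Rsvd for the reserved count at @100; together with HugePages_Total and HugePages_Free from the same snapshot (both 1024 here) these are the counters behind the node0=1024 expecting 1024 assertion seen earlier. A sketch that simply gathers them; the closing comparison is an illustrative assumption, since the exact arithmetic hugepages.sh applies is not shown in this excerpt:

    # Gather the hugepage counters read by verify_nr_hugepages (values taken
    # from the meminfo snapshots above); the final check is an assumption,
    # not the literal hugepages.sh arithmetic.
    expected=1024
    total=$(get_meminfo HugePages_Total)   # 1024
    free=$(get_meminfo HugePages_Free)     # 1024
    surp=$(get_meminfo HugePages_Surp)     # 0
    rsvd=$(get_meminfo HugePages_Rsvd)     # 0
    echo "total=$total free=$free rsvd=$rsvd surp=$surp"
    echo "node0=$(( total - surp )) expecting $expected"
    [[ $(( total - surp )) -eq $expected ]] && echo "OK"
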
local mem_f mem 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170435872 kB' 'MemAvailable: 173670328 kB' 'Buffers: 3896 kB' 'Cached: 14737184 kB' 'SwapCached: 0 kB' 'Active: 11613676 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195720 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570060 kB' 'Mapped: 211800 kB' 'Shmem: 10628812 kB' 'KReclaimable: 532808 kB' 'Slab: 1185352 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652544 kB' 'KernelStack: 20672 kB' 'PageTables: 9468 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12725388 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317176 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.458 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 
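
The mapfile -t mem and the mem=("${mem[@]#Node +([0-9]) }") expansion at the start of the entry above (common.sh@28-29) exist because per-node sysfs meminfo lines carry a leading "Node N " prefix that the key/value parser must not see. A standalone illustration of that strip, assuming a NUMA node 0 path in place of the empty $node in this trace:

    # The "Node N " prefix strip from common.sh@28-29, shown against the
    # per-node file (node0 assumed here; in the trace $node is empty and
    # /proc/meminfo is read instead).
    shopt -s extglob
    mapfile -t mem < /sys/devices/system/node/node0/meminfo
    mem=("${mem[@]#Node +([0-9]) }")
    printf '%s\n' "${mem[@]:0:3}"   # first few lines, now without the "Node 0 " prefix
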
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.459 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.722 
11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.722 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == 
\H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.723 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Rsvd == \H\u\g\e\P\a\g\e\s\_\R\s\v\d ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@100 -- # resv=0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@102 -- # echo nr_hugepages=1024 00:03:40.724 nr_hugepages=1024 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@103 -- # echo resv_hugepages=0 00:03:40.724 resv_hugepages=0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@104 -- # echo surplus_hugepages=0 00:03:40.724 surplus_hugepages=0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@105 -- # echo anon_hugepages=0 00:03:40.724 anon_hugepages=0 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@107 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@109 -- # (( 1024 == nr_hugepages )) 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # get_meminfo HugePages_Total 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Total 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node= 00:03:40.724 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node/meminfo ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@25 -- # [[ -n '' ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 191381136 kB' 'MemFree: 170436916 kB' 'MemAvailable: 173671372 kB' 'Buffers: 3896 kB' 'Cached: 14737204 kB' 'SwapCached: 0 kB' 'Active: 11613740 kB' 'Inactive: 3694312 kB' 'Active(anon): 11195784 kB' 'Inactive(anon): 0 kB' 'Active(file): 417956 kB' 'Inactive(file): 3694312 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'SwapTotal: 8388604 kB' 'SwapFree: 8388604 kB' 'Zswap: 0 kB' 'Zswapped: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'AnonPages: 570096 kB' 'Mapped: 211800 kB' 'Shmem: 10628832 kB' 'KReclaimable: 532808 kB' 'Slab: 1185352 kB' 'SReclaimable: 532808 kB' 'SUnreclaim: 652544 kB' 'KernelStack: 20688 kB' 'PageTables: 9372 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'CommitLimit: 103030596 kB' 'Committed_AS: 12723916 kB' 'VmallocTotal: 34359738367 kB' 'VmallocUsed: 317208 kB' 'VmallocChunk: 0 kB' 'Percpu: 120192 kB' 'HardwareCorrupted: 0 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'CmaTotal: 0 kB' 'CmaFree: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Rsvd: 0' 'HugePages_Surp: 0' 'Hugepagesize: 2048 kB' 'Hugetlb: 2097152 kB' 'DirectMap4k: 3947476 kB' 'DirectMap2M: 33480704 kB' 'DirectMap1G: 164626432 kB' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemAvailable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var 
val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Buffers == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Cached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.724 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswap == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Zswapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Writeback == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == 
\H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CommitLimit == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.725 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Committed_AS == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocUsed == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ VmallocChunk == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Percpu == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HardwareCorrupted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaTotal == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ CmaFree == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\T\o\t\a\l ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 1024 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@110 -- # (( 1024 == nr_hugepages + surp + resv )) 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@112 -- # get_nodes 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@27 -- # local node 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=1024 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@29 -- # for node in /sys/devices/system/node/node+([0-9]) 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@30 -- # nodes_sys[${node##*node}]=0 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@32 -- # no_nodes=2 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@33 -- # (( no_nodes > 0 )) 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@115 -- # for node in "${!nodes_test[@]}" 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@116 -- # (( nodes_test[node] += resv )) 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # get_meminfo HugePages_Surp 0 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@17 -- # local get=HugePages_Surp 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@18 -- # local node=0 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@19 -- # local var val 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@20 -- # local mem_f mem 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@22 -- # mem_f=/proc/meminfo 00:03:40.726 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@23 -- # [[ -e /sys/devices/system/node/node0/meminfo ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@24 -- # mem_f=/sys/devices/system/node/node0/meminfo 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@28 -- # mapfile -t mem 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@29 -- # mem=("${mem[@]#Node +([0-9]) }") 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@16 -- # printf '%s\n' 'MemTotal: 97615628 kB' 'MemFree: 91932624 kB' 'MemUsed: 5683004 kB' 'SwapCached: 0 kB' 'Active: 1998972 kB' 'Inactive: 216924 kB' 'Active(anon): 1837148 kB' 'Inactive(anon): 0 kB' 'Active(file): 161824 kB' 'Inactive(file): 216924 kB' 'Unevictable: 3072 kB' 'Mlocked: 0 kB' 'Dirty: 0 kB' 'Writeback: 0 kB' 'FilePages: 2056528 kB' 'Mapped: 122552 kB' 'AnonPages: 162492 kB' 'Shmem: 1677780 kB' 'KernelStack: 11544 kB' 'PageTables: 3996 kB' 'SecPageTables: 0 kB' 'NFS_Unstable: 0 kB' 'Bounce: 0 kB' 'WritebackTmp: 0 kB' 'KReclaimable: 347008 kB' 'Slab: 654360 kB' 'SReclaimable: 347008 kB' 'SUnreclaim: 307352 kB' 'AnonHugePages: 0 kB' 'ShmemHugePages: 0 kB' 'ShmemPmdMapped: 0 kB' 'FileHugePages: 0 kB' 'FilePmdMapped: 0 kB' 'Unaccepted: 0 kB' 'HugePages_Total: 1024' 'HugePages_Free: 1024' 'HugePages_Surp: 0' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemTotal == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemFree == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ MemUsed == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SwapCached == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.726 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 
00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(anon) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Active(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Inactive(file) == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unevictable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mlocked == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Dirty == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- 
# [[ Writeback == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Mapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonPages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Shmem == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KernelStack == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ PageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SecPageTables == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ NFS_Unstable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 
setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Bounce == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ WritebackTmp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ KReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Slab == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.727 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SReclaimable == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ SUnreclaim == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ AnonHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 
00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ ShmemPmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FileHugePages == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ FilePmdMapped == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ Unaccepted == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Total == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Free == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # continue 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # IFS=': ' 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@31 -- # read -r var val _ 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@32 -- # [[ HugePages_Surp == \H\u\g\e\P\a\g\e\s\_\S\u\r\p ]] 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # echo 0 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/common.sh@33 -- # return 0 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@117 -- # (( nodes_test[node] += 0 )) 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@126 -- # for node in "${!nodes_test[@]}" 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_t[nodes_test[node]]=1 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@127 -- # sorted_s[nodes_sys[node]]=1 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- setup/hugepages.sh@128 -- # echo 'node0=1024 expecting 1024' 00:03:40.728 node0=1024 expecting 1024 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- 
setup/hugepages.sh@130 -- # [[ 1024 == \1\0\2\4 ]] 00:03:40.728 00:03:40.728 real 0m5.492s 00:03:40.728 user 0m2.193s 00:03:40.728 sys 0m3.421s 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.728 11:49:27 setup.sh.hugepages.no_shrink_alloc -- common/autotest_common.sh@10 -- # set +x 00:03:40.728 ************************************ 00:03:40.728 END TEST no_shrink_alloc 00:03:40.728 ************************************ 00:03:40.728 11:49:27 setup.sh.hugepages -- common/autotest_common.sh@1142 -- # return 0 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@217 -- # clear_hp 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@37 -- # local node hp 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@39 -- # for node in "${!nodes_sys[@]}" 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@40 -- # for hp in "/sys/devices/system/node/node$node/hugepages/hugepages-"* 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@41 -- # echo 0 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # export CLEAR_HUGE=yes 00:03:40.728 11:49:27 setup.sh.hugepages -- setup/hugepages.sh@45 -- # CLEAR_HUGE=yes 00:03:40.728 00:03:40.728 real 0m20.450s 00:03:40.728 user 0m7.785s 00:03:40.728 sys 0m11.968s 00:03:40.728 11:49:27 setup.sh.hugepages -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:40.728 11:49:27 setup.sh.hugepages -- common/autotest_common.sh@10 -- # set +x 00:03:40.728 ************************************ 00:03:40.728 END TEST hugepages 00:03:40.728 ************************************ 00:03:40.728 11:49:27 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:40.728 11:49:27 setup.sh -- setup/test-setup.sh@14 -- # run_test driver /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.728 11:49:27 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:40.728 11:49:27 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:40.728 11:49:27 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:40.728 ************************************ 00:03:40.728 START TEST driver 00:03:40.728 ************************************ 00:03:40.728 11:49:27 setup.sh.driver -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/driver.sh 00:03:40.728 * Looking for test storage... 
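
The no_shrink_alloc trace above is setup/common.sh stepping through a meminfo dump field by field until it reaches HugePages_Surp, followed by clear_hp zeroing every per-node hugepage pool before the driver tests start. Reduced to its essentials, and with /proc/meminfo and the function names below used purely for illustration (they are not the SPDK scripts themselves), the pattern being exercised looks roughly like this:

  # Illustrative reduction of the traced loop; needs root for the sysfs writes.
  get_meminfo_field() {                      # e.g. get_meminfo_field HugePages_Surp
      local field=$1 var val _
      while IFS=': ' read -r var val _; do
          [[ $var == "$field" ]] && { echo "$val"; return 0; }
      done < /proc/meminfo
      return 1
  }

  clear_hugepages() {                        # mirrors clear_hp's "echo 0" loop
      local node hp
      for node in /sys/devices/system/node/node*; do
          for hp in "$node"/hugepages/hugepages-*; do
              echo 0 > "$hp/nr_hugepages"    # drop every reserved page on this node
          done
      done
      export CLEAR_HUGE=yes
  }

The "node0=1024 expecting 1024" line just above is the test asserting that the node-0 hugepage count settled back at 1024 pages after the shrink attempts.
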
00:03:40.728 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:40.728 11:49:27 setup.sh.driver -- setup/driver.sh@68 -- # setup reset 00:03:40.728 11:49:27 setup.sh.driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:40.728 11:49:27 setup.sh.driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:44.929 11:49:31 setup.sh.driver -- setup/driver.sh@69 -- # run_test guess_driver guess_driver 00:03:44.929 11:49:31 setup.sh.driver -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:44.929 11:49:31 setup.sh.driver -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:44.929 11:49:31 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:44.929 ************************************ 00:03:44.929 START TEST guess_driver 00:03:44.929 ************************************ 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- common/autotest_common.sh@1123 -- # guess_driver 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@46 -- # local driver setup_driver marker 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@47 -- # local fail=0 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # pick_driver 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@36 -- # vfio 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@21 -- # local iommu_grups 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@22 -- # local unsafe_vfio 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@24 -- # [[ -e /sys/module/vfio/parameters/enable_unsafe_noiommu_mode ]] 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@25 -- # unsafe_vfio=N 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@27 -- # iommu_groups=(/sys/kernel/iommu_groups/*) 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@29 -- # (( 174 > 0 )) 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # is_driver vfio_pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@14 -- # mod vfio_pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # dep vfio_pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@11 -- # modprobe --show-depends vfio_pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@12 -- # [[ insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/virt/lib/irqbypass.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/iommu/iommufd/iommufd.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/vfio_iommu_type1.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci-core.ko.xz 00:03:44.929 insmod /lib/modules/6.7.0-68.fc38.x86_64/kernel/drivers/vfio/pci/vfio-pci.ko.xz == *\.\k\o* ]] 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@30 -- # return 0 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@37 -- # echo vfio-pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@49 -- # driver=vfio-pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver 
-- setup/driver.sh@51 -- # [[ vfio-pci == \N\o\ \v\a\l\i\d\ \d\r\i\v\e\r\ \f\o\u\n\d ]] 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@56 -- # echo 'Looking for driver=vfio-pci' 00:03:44.929 Looking for driver=vfio-pci 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/driver.sh@45 -- # setup output config 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ output == output ]] 00:03:44.929 11:49:31 setup.sh.driver.guess_driver -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.477 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 
setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:47.478 11:49:34 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/driver.sh@58 -- # [[ -> == \-\> ]] 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/driver.sh@61 -- # [[ vfio-pci == vfio-pci ]] 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/driver.sh@57 -- # read -r _ _ _ _ marker setup_driver 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/driver.sh@64 -- # (( fail == 0 )) 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/driver.sh@65 -- # setup reset 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:48.418 11:49:35 setup.sh.driver.guess_driver -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:52.621 00:03:52.621 real 0m7.416s 00:03:52.621 user 0m2.093s 00:03:52.621 sys 0m3.760s 00:03:52.621 11:49:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.621 11:49:39 setup.sh.driver.guess_driver -- common/autotest_common.sh@10 -- # set +x 00:03:52.621 ************************************ 00:03:52.621 END TEST guess_driver 00:03:52.621 ************************************ 00:03:52.621 11:49:39 setup.sh.driver -- common/autotest_common.sh@1142 -- # return 0 00:03:52.621 00:03:52.621 real 0m11.314s 00:03:52.622 user 0m3.215s 00:03:52.622 sys 0m5.818s 00:03:52.622 11:49:39 
setup.sh.driver -- common/autotest_common.sh@1124 -- # xtrace_disable 00:03:52.622 11:49:39 setup.sh.driver -- common/autotest_common.sh@10 -- # set +x 00:03:52.622 ************************************ 00:03:52.622 END TEST driver 00:03:52.622 ************************************ 00:03:52.622 11:49:39 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:03:52.622 11:49:39 setup.sh -- setup/test-setup.sh@15 -- # run_test devices /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:52.622 11:49:39 setup.sh -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:52.622 11:49:39 setup.sh -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:52.622 11:49:39 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:03:52.622 ************************************ 00:03:52.622 START TEST devices 00:03:52.622 ************************************ 00:03:52.622 11:49:39 setup.sh.devices -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/devices.sh 00:03:52.622 * Looking for test storage... 00:03:52.622 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup 00:03:52.622 11:49:39 setup.sh.devices -- setup/devices.sh@190 -- # trap cleanup EXIT 00:03:52.622 11:49:39 setup.sh.devices -- setup/devices.sh@192 -- # setup reset 00:03:52.622 11:49:39 setup.sh.devices -- setup/common.sh@9 -- # [[ reset == output ]] 00:03:52.622 11:49:39 setup.sh.devices -- setup/common.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@194 -- # get_zoned_devs 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # zoned_devs=() 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1669 -- # local -gA zoned_devs 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1670 -- # local nvme bdf 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1672 -- # for nvme in /sys/block/nvme* 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1673 -- # is_block_zoned nvme0n1 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.163 11:49:42 setup.sh.devices -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@196 -- # blocks=() 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@196 -- # declare -a blocks 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@197 -- # blocks_to_pci=() 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@197 -- # declare -A blocks_to_pci 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@198 -- # min_disk_size=3221225472 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@200 -- # for block in "/sys/block/nvme"!(*c*) 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0n1 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@201 -- # ctrl=nvme0 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@202 -- # pci=0000:5e:00.0 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@203 -- # [[ '' == *\0\0\0\0\:\5\e\:\0\0\.\0* ]] 00:03:55.163 11:49:42 setup.sh.devices -- setup/devices.sh@204 -- # block_in_use nvme0n1 00:03:55.163 11:49:42 setup.sh.devices -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:03:55.163 
11:49:42 setup.sh.devices -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:03:55.423 No valid GPT data, bailing 00:03:55.423 11:49:42 setup.sh.devices -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.423 11:49:42 setup.sh.devices -- scripts/common.sh@391 -- # pt= 00:03:55.423 11:49:42 setup.sh.devices -- scripts/common.sh@392 -- # return 1 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@204 -- # sec_size_to_bytes nvme0n1 00:03:55.423 11:49:42 setup.sh.devices -- setup/common.sh@76 -- # local dev=nvme0n1 00:03:55.423 11:49:42 setup.sh.devices -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:03:55.423 11:49:42 setup.sh.devices -- setup/common.sh@80 -- # echo 1000204886016 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@204 -- # (( 1000204886016 >= min_disk_size )) 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@205 -- # blocks+=("${block##*/}") 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@206 -- # blocks_to_pci["${block##*/}"]=0000:5e:00.0 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@209 -- # (( 1 > 0 )) 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@211 -- # declare -r test_disk=nvme0n1 00:03:55.423 11:49:42 setup.sh.devices -- setup/devices.sh@213 -- # run_test nvme_mount nvme_mount 00:03:55.423 11:49:42 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:03:55.423 11:49:42 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:03:55.423 11:49:42 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:03:55.423 ************************************ 00:03:55.423 START TEST nvme_mount 00:03:55.423 ************************************ 00:03:55.423 11:49:42 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1123 -- # nvme_mount 00:03:55.423 11:49:42 setup.sh.devices.nvme_mount -- setup/devices.sh@95 -- # nvme_disk=nvme0n1 00:03:55.423 11:49:42 setup.sh.devices.nvme_mount -- setup/devices.sh@96 -- # nvme_disk_p=nvme0n1p1 00:03:55.423 11:49:42 setup.sh.devices.nvme_mount -- setup/devices.sh@97 -- # nvme_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:55.423 11:49:42 setup.sh.devices.nvme_mount -- setup/devices.sh@98 -- # nvme_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/devices.sh@101 -- # partition_drive nvme0n1 1 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@40 -- # local part_no=1 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@41 -- # local size=1073741824 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # parts=() 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@44 -- # local parts 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- # (( part++ )) 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@46 -- 
# (( part <= part_no )) 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:03:55.424 11:49:42 setup.sh.devices.nvme_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 00:03:56.373 Creating new GPT entries in memory. 00:03:56.373 GPT data structures destroyed! You may now partition the disk using fdisk or 00:03:56.373 other utilities. 00:03:56.373 11:49:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:03:56.373 11:49:43 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:56.373 11:49:43 setup.sh.devices.nvme_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:03:56.373 11:49:43 setup.sh.devices.nvme_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:03:56.373 11:49:43 setup.sh.devices.nvme_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:03:57.311 Creating new GPT entries in memory. 00:03:57.312 The operation has completed successfully. 00:03:57.312 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part++ )) 00:03:57.312 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:03:57.312 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@62 -- # wait 127804 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@102 -- # mkfs /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@66 -- # local dev=/dev/nvme0n1p1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size= 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1p1 ]] 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1p1 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@105 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1p1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1p1 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:03:57.571 11:49:44 
setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:03:57.571 11:49:44 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1p1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1\p\1* ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- 
setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@110 -- # cleanup_nvme 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@21 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:00.112 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:00.112 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:00.373 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:00.373 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:00.373 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:00.373 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@113 -- # mkfs /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 1024M 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- 
setup/common.sh@66 -- # local dev=/dev/nvme0n1 mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount size=1024M 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@70 -- # [[ -e /dev/nvme0n1 ]] 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/nvme0n1 1024M 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@72 -- # mount /dev/nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@116 -- # verify 0000:5e:00.0 nvme0n1:nvme0n1 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme0n1 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@56 -- # : 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:00.373 11:49:47 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: mount@nvme0n1:nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\0\n\1* ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 
== \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount ]] 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.954 11:49:49 setup.sh.devices.nvme_mount -- 
setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme ]] 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount/test_nvme 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@123 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@125 -- # verify 0000:5e:00.0 data@nvme0n1 '' '' 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@49 -- # local mounts=data@nvme0n1 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@51 -- # local test_file= 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@53 -- # local found=0 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@59 -- # local pci status 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/devices.sh@47 -- # setup output config 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:02.955 11:49:49 setup.sh.devices.nvme_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ Active devices: data@nvme0n1, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\d\a\t\a\@\n\v\m\e\0\n\1* ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@63 -- # found=1 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 
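
The nvme_mount phase traced here boils down to: carve a ~1 GiB GPT partition on the test disk, format and mount it under the repo's test/setup tree, drop a test_nvme marker file, re-run setup.sh config to confirm the in-use device is not rebound, then tear everything down. A condensed outline follows; the device, partition range, and mount-point names are taken from this log, while the sequencing itself is an illustrative sketch rather than the test's actual code:

  # Condensed outline of the traced nvme_mount cycle (illustrative, not the SPDK test).
  disk=/dev/nvme0n1
  part=${disk}p1
  mnt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount

  sgdisk "$disk" --zap-all                    # wipe any existing partition table
  sgdisk "$disk" --new=1:2048:2099199         # ~1 GiB test partition (sectors 2048..2099199)
  mkfs.ext4 -qF "$part"
  mkdir -p "$mnt"
  mount "$part" "$mnt"
  touch "$mnt/test_nvme"                      # marker file the verify step checks for

  # teardown, as in the cleanup_nvme path
  umount "$mnt"
  wipefs --all "$part"
  wipefs --all "$disk"

The repeated PCI_ALLOWED=0000:5e:00.0 setup.sh config calls in between are where the "Active devices: ..., so not binding PCI dev" lines come from: setup.sh declines to rebind a controller whose namespace is currently mounted or held.
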
00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@68 -- # return 0 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@128 -- # cleanup_nvme 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:05.497 /dev/nvme0n1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:05.497 00:04:05.497 real 0m9.819s 00:04:05.497 user 0m2.715s 00:04:05.497 sys 0m4.751s 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:05.497 11:49:52 setup.sh.devices.nvme_mount -- 
common/autotest_common.sh@10 -- # set +x 00:04:05.497 ************************************ 00:04:05.497 END TEST nvme_mount 00:04:05.497 ************************************ 00:04:05.497 11:49:52 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 0 00:04:05.497 11:49:52 setup.sh.devices -- setup/devices.sh@214 -- # run_test dm_mount dm_mount 00:04:05.497 11:49:52 setup.sh.devices -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:05.497 11:49:52 setup.sh.devices -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:05.497 11:49:52 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:05.497 ************************************ 00:04:05.497 START TEST dm_mount 00:04:05.497 ************************************ 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- common/autotest_common.sh@1123 -- # dm_mount 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/devices.sh@144 -- # pv=nvme0n1 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/devices.sh@145 -- # pv0=nvme0n1p1 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/devices.sh@146 -- # pv1=nvme0n1p2 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/devices.sh@148 -- # partition_drive nvme0n1 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@39 -- # local disk=nvme0n1 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@40 -- # local part_no=2 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@41 -- # local size=1073741824 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@43 -- # local part part_start=0 part_end=0 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # parts=() 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@44 -- # local parts 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part = 1 )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@47 -- # parts+=("${disk}p$part") 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part++ )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@46 -- # (( part <= part_no )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@51 -- # (( size /= 512 )) 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@56 -- # sgdisk /dev/nvme0n1 --zap-all 00:04:05.497 11:49:52 setup.sh.devices.dm_mount -- setup/common.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/sync_dev_uevents.sh block/partition nvme0n1p1 nvme0n1p2 00:04:06.438 Creating new GPT entries in memory. 00:04:06.438 GPT data structures destroyed! You may now partition the disk using fdisk or 00:04:06.438 other utilities. 00:04:06.438 11:49:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part = 1 )) 00:04:06.438 11:49:53 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:06.438 11:49:53 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 
2048 : part_end + 1 )) 00:04:06.438 11:49:53 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:06.438 11:49:53 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=1:2048:2099199 00:04:07.379 Creating new GPT entries in memory. 00:04:07.379 The operation has completed successfully. 00:04:07.379 11:49:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:07.379 11:49:54 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:07.379 11:49:54 setup.sh.devices.dm_mount -- setup/common.sh@58 -- # (( part_start = part_start == 0 ? 2048 : part_end + 1 )) 00:04:07.379 11:49:54 setup.sh.devices.dm_mount -- setup/common.sh@59 -- # (( part_end = part_start + size - 1 )) 00:04:07.379 11:49:54 setup.sh.devices.dm_mount -- setup/common.sh@60 -- # flock /dev/nvme0n1 sgdisk /dev/nvme0n1 --new=2:2099200:4196351 00:04:08.320 The operation has completed successfully. 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part++ )) 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@57 -- # (( part <= part_no )) 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@62 -- # wait 131767 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@150 -- # dm_name=nvme_dm_test 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@151 -- # dm_mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@152 -- # dm_dummy_test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@155 -- # dmsetup create nvme_dm_test 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@160 -- # for t in {1..5} 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@161 -- # break 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@164 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # readlink -f /dev/mapper/nvme_dm_test 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@165 -- # dm=/dev/dm-2 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@166 -- # dm=dm-2 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@168 -- # [[ -e /sys/class/block/nvme0n1p1/holders/dm-2 ]] 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@169 -- # [[ -e /sys/class/block/nvme0n1p2/holders/dm-2 ]] 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@171 -- # mkfs /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@66 -- # local dev=/dev/mapper/nvme_dm_test mount=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount size= 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@68 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@70 -- # [[ -e /dev/mapper/nvme_dm_test ]] 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@71 -- # mkfs.ext4 -qF /dev/mapper/nvme_dm_test 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- 
setup/common.sh@72 -- # mount /dev/mapper/nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@174 -- # verify 0000:5e:00.0 nvme0n1:nvme_dm_test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=nvme0n1:nvme_dm_test 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:08.320 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@56 -- # : 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:08.580 11:49:55 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:11.125 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2,mount@nvme0n1:nvme_dm_test, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\n\v\m\e\0\n\1\:\n\v\m\e\_\d\m\_\t\e\s\t* ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:11.126 11:49:57 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount ]] 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@71 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@73 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm ]] 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@74 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount/test_dm 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@182 -- # umount /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@184 -- # verify 0000:5e:00.0 holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 '' '' 00:04:11.126 11:49:58 
setup.sh.devices.dm_mount -- setup/devices.sh@48 -- # local dev=0000:5e:00.0 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@49 -- # local mounts=holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@50 -- # local mount_point= 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@51 -- # local test_file= 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@53 -- # local found=0 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@55 -- # [[ -n '' ]] 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@59 -- # local pci status 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # PCI_ALLOWED=0000:5e:00.0 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/devices.sh@47 -- # setup output config 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/common.sh@9 -- # [[ output == output ]] 00:04:11.126 11:49:58 setup.sh.devices.dm_mount -- setup/common.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh config 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:5e:00.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ Active devices: holder@nvme0n1p1:dm-2,holder@nvme0n1p2:dm-2, so not binding PCI dev == *\A\c\t\i\v\e\ \d\e\v\i\c\e\s\:\ *\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\1\:\d\m\-\2\,\h\o\l\d\e\r\@\n\v\m\e\0\n\1\p\2\:\d\m\-\2* ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@63 -- # found=1 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:00:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.670 11:50:00 setup.sh.devices.dm_mount -- 
setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.7 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.6 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.5 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.4 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.3 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.2 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.1 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@62 -- # [[ 0000:80:04.0 == \0\0\0\0\:\5\e\:\0\0\.\0 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@60 -- # read -r pci _ _ status 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@66 -- # (( found == 1 )) 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # [[ -n '' ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@68 -- # return 0 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@187 -- # cleanup_dm 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@37 -- # dmsetup remove --force nvme_dm_test 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@40 -- # wipefs --all /dev/nvme0n1p1 00:04:13.671 /dev/nvme0n1p1: 2 bytes were erased at offset 0x00000438 (ext4): 53 ef 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- setup/devices.sh@43 -- # wipefs --all /dev/nvme0n1p2 00:04:13.671 00:04:13.671 real 0m8.256s 00:04:13.671 user 0m1.905s 00:04:13.671 sys 0m3.359s 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.671 11:50:00 setup.sh.devices.dm_mount -- common/autotest_common.sh@10 -- # set +x 00:04:13.671 ************************************ 00:04:13.671 END TEST dm_mount 00:04:13.671 ************************************ 00:04:13.671 11:50:00 setup.sh.devices -- common/autotest_common.sh@1142 -- # return 
0 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@1 -- # cleanup 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@11 -- # cleanup_nvme 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@20 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/nvme_mount 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@24 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@25 -- # wipefs --all /dev/nvme0n1p1 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@27 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.671 11:50:00 setup.sh.devices -- setup/devices.sh@28 -- # wipefs --all /dev/nvme0n1 00:04:13.930 /dev/nvme0n1: 8 bytes were erased at offset 0x00000200 (gpt): 45 46 49 20 50 41 52 54 00:04:13.930 /dev/nvme0n1: 8 bytes were erased at offset 0xe8e0db5e00 (gpt): 45 46 49 20 50 41 52 54 00:04:13.930 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:04:13.930 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@12 -- # cleanup_dm 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@33 -- # mountpoint -q /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/setup/dm_mount 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@36 -- # [[ -L /dev/mapper/nvme_dm_test ]] 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@39 -- # [[ -b /dev/nvme0n1p1 ]] 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@42 -- # [[ -b /dev/nvme0n1p2 ]] 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@14 -- # [[ -b /dev/nvme0n1 ]] 00:04:13.930 11:50:00 setup.sh.devices -- setup/devices.sh@15 -- # wipefs --all /dev/nvme0n1 00:04:13.930 00:04:13.930 real 0m21.696s 00:04:13.930 user 0m5.875s 00:04:13.930 sys 0m10.347s 00:04:13.930 11:50:00 setup.sh.devices -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.930 11:50:00 setup.sh.devices -- common/autotest_common.sh@10 -- # set +x 00:04:13.930 ************************************ 00:04:13.930 END TEST devices 00:04:13.930 ************************************ 00:04:13.930 11:50:00 setup.sh -- common/autotest_common.sh@1142 -- # return 0 00:04:13.930 00:04:13.930 real 1m12.338s 00:04:13.930 user 0m23.003s 00:04:13.930 sys 0m39.422s 00:04:13.930 11:50:00 setup.sh -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:13.930 11:50:00 setup.sh -- common/autotest_common.sh@10 -- # set +x 00:04:13.930 ************************************ 00:04:13.930 END TEST setup.sh 00:04:13.930 ************************************ 00:04:13.930 11:50:01 -- common/autotest_common.sh@1142 -- # return 0 00:04:13.930 11:50:01 -- spdk/autotest.sh@128 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh status 00:04:16.472 Hugepages 00:04:16.472 node hugesize free / total 00:04:16.472 node0 1048576kB 0 / 0 00:04:16.472 node0 2048kB 2048 / 2048 00:04:16.472 node1 1048576kB 0 / 0 00:04:16.472 node1 2048kB 0 / 0 00:04:16.472 00:04:16.472 Type BDF Vendor Device NUMA Driver Device Block devices 00:04:16.472 I/OAT 0000:00:04.0 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.1 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.2 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.3 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.4 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.5 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.6 8086 2021 0 ioatdma - - 00:04:16.472 I/OAT 0000:00:04.7 8086 2021 0 ioatdma - - 00:04:16.472 NVMe 
0000:5e:00.0 8086 0a54 0 nvme nvme0 nvme0n1 00:04:16.472 I/OAT 0000:80:04.0 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.1 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.2 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.3 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.4 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.5 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.6 8086 2021 1 ioatdma - - 00:04:16.472 I/OAT 0000:80:04.7 8086 2021 1 ioatdma - - 00:04:16.472 11:50:03 -- spdk/autotest.sh@130 -- # uname -s 00:04:16.472 11:50:03 -- spdk/autotest.sh@130 -- # [[ Linux == Linux ]] 00:04:16.472 11:50:03 -- spdk/autotest.sh@132 -- # nvme_namespace_revert 00:04:16.472 11:50:03 -- common/autotest_common.sh@1531 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:19.015 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:19.015 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:19.956 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:19.956 11:50:07 -- common/autotest_common.sh@1532 -- # sleep 1 00:04:20.896 11:50:08 -- common/autotest_common.sh@1533 -- # bdfs=() 00:04:20.896 11:50:08 -- common/autotest_common.sh@1533 -- # local bdfs 00:04:20.896 11:50:08 -- common/autotest_common.sh@1534 -- # bdfs=($(get_nvme_bdfs)) 00:04:20.896 11:50:08 -- common/autotest_common.sh@1534 -- # get_nvme_bdfs 00:04:20.896 11:50:08 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:20.896 11:50:08 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:20.896 11:50:08 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:20.896 11:50:08 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:20.896 11:50:08 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:21.157 11:50:08 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:21.158 11:50:08 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:21.158 11:50:08 -- common/autotest_common.sh@1536 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:04:23.717 Waiting for block devices as requested 00:04:23.717 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:04:23.717 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:23.717 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:23.717 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:23.978 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:23.978 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:23.978 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:23.978 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.239 0000:00:04.0 (8086 2021): 
vfio-pci -> ioatdma 00:04:24.239 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:04:24.239 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:04:24.239 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:04:24.499 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:04:24.499 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:04:24.500 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:04:24.760 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:04:24.760 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:04:24.760 11:50:11 -- common/autotest_common.sh@1538 -- # for bdf in "${bdfs[@]}" 00:04:24.760 11:50:11 -- common/autotest_common.sh@1539 -- # get_nvme_ctrlr_from_bdf 0000:5e:00.0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1502 -- # readlink -f /sys/class/nvme/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1502 -- # grep 0000:5e:00.0/nvme/nvme 00:04:24.760 11:50:11 -- common/autotest_common.sh@1502 -- # bdf_sysfs_path=/sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1503 -- # [[ -z /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 ]] 00:04:24.760 11:50:11 -- common/autotest_common.sh@1507 -- # basename /sys/devices/pci0000:5d/0000:5d:02.0/0000:5e:00.0/nvme/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1507 -- # printf '%s\n' nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1539 -- # nvme_ctrlr=/dev/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1540 -- # [[ -z /dev/nvme0 ]] 00:04:24.760 11:50:11 -- common/autotest_common.sh@1545 -- # nvme id-ctrl /dev/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1545 -- # grep oacs 00:04:24.760 11:50:11 -- common/autotest_common.sh@1545 -- # cut -d: -f2 00:04:24.760 11:50:11 -- common/autotest_common.sh@1545 -- # oacs=' 0xe' 00:04:24.760 11:50:11 -- common/autotest_common.sh@1546 -- # oacs_ns_manage=8 00:04:24.760 11:50:11 -- common/autotest_common.sh@1548 -- # [[ 8 -ne 0 ]] 00:04:24.760 11:50:11 -- common/autotest_common.sh@1554 -- # nvme id-ctrl /dev/nvme0 00:04:24.760 11:50:11 -- common/autotest_common.sh@1554 -- # grep unvmcap 00:04:24.760 11:50:11 -- common/autotest_common.sh@1554 -- # cut -d: -f2 00:04:24.760 11:50:11 -- common/autotest_common.sh@1554 -- # unvmcap=' 0' 00:04:24.760 11:50:11 -- common/autotest_common.sh@1555 -- # [[ 0 -eq 0 ]] 00:04:24.760 11:50:11 -- common/autotest_common.sh@1557 -- # continue 00:04:24.760 11:50:11 -- spdk/autotest.sh@135 -- # timing_exit pre_cleanup 00:04:24.760 11:50:11 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:24.760 11:50:11 -- common/autotest_common.sh@10 -- # set +x 00:04:24.760 11:50:11 -- spdk/autotest.sh@138 -- # timing_enter afterboot 00:04:24.760 11:50:11 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:24.760 11:50:11 -- common/autotest_common.sh@10 -- # set +x 00:04:24.760 11:50:11 -- spdk/autotest.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:04:27.302 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 
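Note on the nvme id-ctrl probing traced above: the oacs/unvmcap checks reduce to a short shell pattern. The following is an illustrative sketch assembled from the traced commands (controller node /dev/nvme0 and the grep/cut pipeline come from the trace; it is not the autotest_common.sh source itself):

    # Probe OACS bit 3 (namespace management) and unallocated capacity,
    # mirroring the id-ctrl | grep | cut pipeline shown in the trace.
    ctrlr=/dev/nvme0
    oacs=$(nvme id-ctrl "$ctrlr" | grep oacs | cut -d: -f2)           # e.g. " 0xe"
    oacs_ns_manage=$((oacs & 0x8))                                    # bit 3 set -> ns management supported
    if [[ $oacs_ns_manage -ne 0 ]]; then
      unvmcap=$(nvme id-ctrl "$ctrlr" | grep unvmcap | cut -d: -f2)
      [[ $unvmcap -eq 0 ]] && echo "no unallocated capacity on $ctrlr, nothing to revert"
    fi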
00:04:27.302 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:04:27.302 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:04:28.243 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:04:28.243 11:50:15 -- spdk/autotest.sh@140 -- # timing_exit afterboot 00:04:28.243 11:50:15 -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:28.243 11:50:15 -- common/autotest_common.sh@10 -- # set +x 00:04:28.243 11:50:15 -- spdk/autotest.sh@144 -- # opal_revert_cleanup 00:04:28.243 11:50:15 -- common/autotest_common.sh@1591 -- # mapfile -t bdfs 00:04:28.243 11:50:15 -- common/autotest_common.sh@1591 -- # get_nvme_bdfs_by_id 0x0a54 00:04:28.243 11:50:15 -- common/autotest_common.sh@1577 -- # bdfs=() 00:04:28.243 11:50:15 -- common/autotest_common.sh@1577 -- # local bdfs 00:04:28.243 11:50:15 -- common/autotest_common.sh@1579 -- # get_nvme_bdfs 00:04:28.243 11:50:15 -- common/autotest_common.sh@1513 -- # bdfs=() 00:04:28.243 11:50:15 -- common/autotest_common.sh@1513 -- # local bdfs 00:04:28.243 11:50:15 -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:28.243 11:50:15 -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:04:28.243 11:50:15 -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:04:28.503 11:50:15 -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:04:28.503 11:50:15 -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:04:28.503 11:50:15 -- common/autotest_common.sh@1579 -- # for bdf in $(get_nvme_bdfs) 00:04:28.504 11:50:15 -- common/autotest_common.sh@1580 -- # cat /sys/bus/pci/devices/0000:5e:00.0/device 00:04:28.504 11:50:15 -- common/autotest_common.sh@1580 -- # device=0x0a54 00:04:28.504 11:50:15 -- common/autotest_common.sh@1581 -- # [[ 0x0a54 == \0\x\0\a\5\4 ]] 00:04:28.504 11:50:15 -- common/autotest_common.sh@1582 -- # bdfs+=($bdf) 00:04:28.504 11:50:15 -- common/autotest_common.sh@1586 -- # printf '%s\n' 0000:5e:00.0 00:04:28.504 11:50:15 -- common/autotest_common.sh@1592 -- # [[ -z 0000:5e:00.0 ]] 00:04:28.504 11:50:15 -- common/autotest_common.sh@1597 -- # spdk_tgt_pid=140494 00:04:28.504 11:50:15 -- common/autotest_common.sh@1598 -- # waitforlisten 140494 00:04:28.504 11:50:15 -- common/autotest_common.sh@1596 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:28.504 11:50:15 -- common/autotest_common.sh@829 -- # '[' -z 140494 ']' 00:04:28.504 11:50:15 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:28.504 11:50:15 -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:28.504 11:50:15 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:28.504 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:28.504 11:50:15 -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:28.504 11:50:15 -- common/autotest_common.sh@10 -- # set +x 00:04:28.504 [2024-07-25 11:50:15.600127] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
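The get_nvme_bdfs helper traced above boils down to gen_nvme.sh piped through jq; a minimal sketch of that pattern, using the workspace path exactly as it appears in the trace:

    # Collect NVMe PCI addresses: gen_nvme.sh emits a JSON bdev config and jq
    # pulls out each traddr (a single 0000:5e:00.0 on this test node).
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} == 0 )) && { echo "no NVMe devices found" >&2; exit 1; }
    printf '%s\n' "${bdfs[@]}"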
00:04:28.504 [2024-07-25 11:50:15.600181] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid140494 ] 00:04:28.504 EAL: No free 2048 kB hugepages reported on node 1 00:04:28.504 [2024-07-25 11:50:15.656124] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:28.504 [2024-07-25 11:50:15.734465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:29.444 11:50:16 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:29.444 11:50:16 -- common/autotest_common.sh@862 -- # return 0 00:04:29.444 11:50:16 -- common/autotest_common.sh@1600 -- # bdf_id=0 00:04:29.444 11:50:16 -- common/autotest_common.sh@1601 -- # for bdf in "${bdfs[@]}" 00:04:29.444 11:50:16 -- common/autotest_common.sh@1602 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0 00:04:32.740 nvme0n1 00:04:32.740 11:50:19 -- common/autotest_common.sh@1604 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_nvme_opal_revert -b nvme0 -p test 00:04:32.740 [2024-07-25 11:50:19.536067] vbdev_opal_rpc.c: 125:rpc_bdev_nvme_opal_revert: *ERROR*: nvme0 not support opal 00:04:32.740 request: 00:04:32.740 { 00:04:32.740 "nvme_ctrlr_name": "nvme0", 00:04:32.740 "password": "test", 00:04:32.740 "method": "bdev_nvme_opal_revert", 00:04:32.740 "req_id": 1 00:04:32.740 } 00:04:32.740 Got JSON-RPC error response 00:04:32.740 response: 00:04:32.740 { 00:04:32.740 "code": -32602, 00:04:32.740 "message": "Invalid parameters" 00:04:32.740 } 00:04:32.740 11:50:19 -- common/autotest_common.sh@1604 -- # true 00:04:32.740 11:50:19 -- common/autotest_common.sh@1605 -- # (( ++bdf_id )) 00:04:32.740 11:50:19 -- common/autotest_common.sh@1608 -- # killprocess 140494 00:04:32.740 11:50:19 -- common/autotest_common.sh@948 -- # '[' -z 140494 ']' 00:04:32.740 11:50:19 -- common/autotest_common.sh@952 -- # kill -0 140494 00:04:32.740 11:50:19 -- common/autotest_common.sh@953 -- # uname 00:04:32.740 11:50:19 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:32.740 11:50:19 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 140494 00:04:32.740 11:50:19 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:32.740 11:50:19 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:32.740 11:50:19 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 140494' 00:04:32.740 killing process with pid 140494 00:04:32.740 11:50:19 -- common/autotest_common.sh@967 -- # kill 140494 00:04:32.740 11:50:19 -- common/autotest_common.sh@972 -- # wait 140494 00:04:34.120 11:50:21 -- spdk/autotest.sh@150 -- # '[' 0 -eq 1 ']' 00:04:34.120 11:50:21 -- spdk/autotest.sh@154 -- # '[' 1 -eq 1 ']' 00:04:34.120 11:50:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.120 11:50:21 -- spdk/autotest.sh@155 -- # [[ 0 -eq 1 ]] 00:04:34.120 11:50:21 -- spdk/autotest.sh@162 -- # timing_enter lib 00:04:34.120 11:50:21 -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:34.120 11:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:34.120 11:50:21 -- spdk/autotest.sh@164 -- # [[ 0 -eq 1 ]] 00:04:34.120 11:50:21 -- spdk/autotest.sh@168 -- # run_test env /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.120 11:50:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 
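The opal_revert_cleanup step above drives the freshly started target purely over JSON-RPC; both rpc.py calls are visible in the trace. An illustrative replay of that sequence (the trailing '|| true' reflects how the trace tolerates the expected failure on a non-OPAL drive):

    # Attach the controller at 0000:5e:00.0 as bdev "nvme0", then attempt the
    # OPAL revert that the log shows failing with "nvme0 not support opal".
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    "$rpc" bdev_nvme_attach_controller -b nvme0 -t pcie -a 0000:5e:00.0
    "$rpc" bdev_nvme_opal_revert -b nvme0 -p test || true    # non-OPAL drives answer with error -32602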
00:04:34.120 11:50:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.120 11:50:21 -- common/autotest_common.sh@10 -- # set +x 00:04:34.120 ************************************ 00:04:34.120 START TEST env 00:04:34.120 ************************************ 00:04:34.120 11:50:21 env -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env.sh 00:04:34.120 * Looking for test storage... 00:04:34.120 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env 00:04:34.120 11:50:21 env -- env/env.sh@10 -- # run_test env_memory /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.120 11:50:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.120 11:50:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.120 11:50:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.121 ************************************ 00:04:34.121 START TEST env_memory 00:04:34.121 ************************************ 00:04:34.121 11:50:21 env.env_memory -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/memory/memory_ut 00:04:34.121 00:04:34.121 00:04:34.121 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.121 http://cunit.sourceforge.net/ 00:04:34.121 00:04:34.121 00:04:34.121 Suite: memory 00:04:34.121 Test: alloc and free memory map ...[2024-07-25 11:50:21.368126] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:34.381 passed 00:04:34.381 Test: mem map translation ...[2024-07-25 11:50:21.387273] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:34.381 [2024-07-25 11:50:21.387287] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 590:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:34.381 [2024-07-25 11:50:21.387324] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 584:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:34.381 [2024-07-25 11:50:21.387333] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 600:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:34.381 passed 00:04:34.381 Test: mem map registration ...[2024-07-25 11:50:21.424044] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x200000 len=1234 00:04:34.381 [2024-07-25 11:50:21.424058] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/memory.c: 346:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=0x4d2 len=2097152 00:04:34.381 passed 00:04:34.381 Test: mem map adjacent registrations ...passed 00:04:34.381 00:04:34.381 Run Summary: Type Total Ran Passed Failed Inactive 00:04:34.381 suites 1 1 n/a 0 0 00:04:34.381 tests 4 4 4 0 0 00:04:34.381 asserts 152 152 152 0 n/a 00:04:34.381 00:04:34.381 Elapsed time = 0.138 seconds 00:04:34.381 00:04:34.381 real 0m0.150s 00:04:34.381 user 0m0.138s 00:04:34.381 sys 0m0.012s 00:04:34.381 11:50:21 env.env_memory -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:34.381 11:50:21 env.env_memory -- common/autotest_common.sh@10 -- # set 
+x 00:04:34.381 ************************************ 00:04:34.381 END TEST env_memory 00:04:34.381 ************************************ 00:04:34.381 11:50:21 env -- common/autotest_common.sh@1142 -- # return 0 00:04:34.381 11:50:21 env -- env/env.sh@11 -- # run_test env_vtophys /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.381 11:50:21 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:34.381 11:50:21 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:34.381 11:50:21 env -- common/autotest_common.sh@10 -- # set +x 00:04:34.381 ************************************ 00:04:34.381 START TEST env_vtophys 00:04:34.381 ************************************ 00:04:34.381 11:50:21 env.env_vtophys -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/vtophys/vtophys 00:04:34.381 EAL: lib.eal log level changed from notice to debug 00:04:34.381 EAL: Detected lcore 0 as core 0 on socket 0 00:04:34.381 EAL: Detected lcore 1 as core 1 on socket 0 00:04:34.381 EAL: Detected lcore 2 as core 2 on socket 0 00:04:34.381 EAL: Detected lcore 3 as core 3 on socket 0 00:04:34.381 EAL: Detected lcore 4 as core 4 on socket 0 00:04:34.381 EAL: Detected lcore 5 as core 5 on socket 0 00:04:34.381 EAL: Detected lcore 6 as core 6 on socket 0 00:04:34.381 EAL: Detected lcore 7 as core 8 on socket 0 00:04:34.381 EAL: Detected lcore 8 as core 9 on socket 0 00:04:34.381 EAL: Detected lcore 9 as core 10 on socket 0 00:04:34.381 EAL: Detected lcore 10 as core 11 on socket 0 00:04:34.381 EAL: Detected lcore 11 as core 12 on socket 0 00:04:34.381 EAL: Detected lcore 12 as core 13 on socket 0 00:04:34.381 EAL: Detected lcore 13 as core 16 on socket 0 00:04:34.381 EAL: Detected lcore 14 as core 17 on socket 0 00:04:34.381 EAL: Detected lcore 15 as core 18 on socket 0 00:04:34.381 EAL: Detected lcore 16 as core 19 on socket 0 00:04:34.381 EAL: Detected lcore 17 as core 20 on socket 0 00:04:34.381 EAL: Detected lcore 18 as core 21 on socket 0 00:04:34.381 EAL: Detected lcore 19 as core 25 on socket 0 00:04:34.381 EAL: Detected lcore 20 as core 26 on socket 0 00:04:34.381 EAL: Detected lcore 21 as core 27 on socket 0 00:04:34.381 EAL: Detected lcore 22 as core 28 on socket 0 00:04:34.381 EAL: Detected lcore 23 as core 29 on socket 0 00:04:34.381 EAL: Detected lcore 24 as core 0 on socket 1 00:04:34.381 EAL: Detected lcore 25 as core 1 on socket 1 00:04:34.381 EAL: Detected lcore 26 as core 2 on socket 1 00:04:34.381 EAL: Detected lcore 27 as core 3 on socket 1 00:04:34.381 EAL: Detected lcore 28 as core 4 on socket 1 00:04:34.381 EAL: Detected lcore 29 as core 5 on socket 1 00:04:34.381 EAL: Detected lcore 30 as core 6 on socket 1 00:04:34.381 EAL: Detected lcore 31 as core 9 on socket 1 00:04:34.381 EAL: Detected lcore 32 as core 10 on socket 1 00:04:34.381 EAL: Detected lcore 33 as core 11 on socket 1 00:04:34.381 EAL: Detected lcore 34 as core 12 on socket 1 00:04:34.381 EAL: Detected lcore 35 as core 13 on socket 1 00:04:34.381 EAL: Detected lcore 36 as core 16 on socket 1 00:04:34.381 EAL: Detected lcore 37 as core 17 on socket 1 00:04:34.381 EAL: Detected lcore 38 as core 18 on socket 1 00:04:34.381 EAL: Detected lcore 39 as core 19 on socket 1 00:04:34.381 EAL: Detected lcore 40 as core 20 on socket 1 00:04:34.381 EAL: Detected lcore 41 as core 21 on socket 1 00:04:34.381 EAL: Detected lcore 42 as core 24 on socket 1 00:04:34.381 EAL: Detected lcore 43 as core 25 on socket 1 00:04:34.381 EAL: Detected lcore 44 as core 26 
on socket 1 00:04:34.381 EAL: Detected lcore 45 as core 27 on socket 1 00:04:34.381 EAL: Detected lcore 46 as core 28 on socket 1 00:04:34.381 EAL: Detected lcore 47 as core 29 on socket 1 00:04:34.381 EAL: Detected lcore 48 as core 0 on socket 0 00:04:34.381 EAL: Detected lcore 49 as core 1 on socket 0 00:04:34.381 EAL: Detected lcore 50 as core 2 on socket 0 00:04:34.381 EAL: Detected lcore 51 as core 3 on socket 0 00:04:34.381 EAL: Detected lcore 52 as core 4 on socket 0 00:04:34.381 EAL: Detected lcore 53 as core 5 on socket 0 00:04:34.381 EAL: Detected lcore 54 as core 6 on socket 0 00:04:34.381 EAL: Detected lcore 55 as core 8 on socket 0 00:04:34.381 EAL: Detected lcore 56 as core 9 on socket 0 00:04:34.381 EAL: Detected lcore 57 as core 10 on socket 0 00:04:34.382 EAL: Detected lcore 58 as core 11 on socket 0 00:04:34.382 EAL: Detected lcore 59 as core 12 on socket 0 00:04:34.382 EAL: Detected lcore 60 as core 13 on socket 0 00:04:34.382 EAL: Detected lcore 61 as core 16 on socket 0 00:04:34.382 EAL: Detected lcore 62 as core 17 on socket 0 00:04:34.382 EAL: Detected lcore 63 as core 18 on socket 0 00:04:34.382 EAL: Detected lcore 64 as core 19 on socket 0 00:04:34.382 EAL: Detected lcore 65 as core 20 on socket 0 00:04:34.382 EAL: Detected lcore 66 as core 21 on socket 0 00:04:34.382 EAL: Detected lcore 67 as core 25 on socket 0 00:04:34.382 EAL: Detected lcore 68 as core 26 on socket 0 00:04:34.382 EAL: Detected lcore 69 as core 27 on socket 0 00:04:34.382 EAL: Detected lcore 70 as core 28 on socket 0 00:04:34.382 EAL: Detected lcore 71 as core 29 on socket 0 00:04:34.382 EAL: Detected lcore 72 as core 0 on socket 1 00:04:34.382 EAL: Detected lcore 73 as core 1 on socket 1 00:04:34.382 EAL: Detected lcore 74 as core 2 on socket 1 00:04:34.382 EAL: Detected lcore 75 as core 3 on socket 1 00:04:34.382 EAL: Detected lcore 76 as core 4 on socket 1 00:04:34.382 EAL: Detected lcore 77 as core 5 on socket 1 00:04:34.382 EAL: Detected lcore 78 as core 6 on socket 1 00:04:34.382 EAL: Detected lcore 79 as core 9 on socket 1 00:04:34.382 EAL: Detected lcore 80 as core 10 on socket 1 00:04:34.382 EAL: Detected lcore 81 as core 11 on socket 1 00:04:34.382 EAL: Detected lcore 82 as core 12 on socket 1 00:04:34.382 EAL: Detected lcore 83 as core 13 on socket 1 00:04:34.382 EAL: Detected lcore 84 as core 16 on socket 1 00:04:34.382 EAL: Detected lcore 85 as core 17 on socket 1 00:04:34.382 EAL: Detected lcore 86 as core 18 on socket 1 00:04:34.382 EAL: Detected lcore 87 as core 19 on socket 1 00:04:34.382 EAL: Detected lcore 88 as core 20 on socket 1 00:04:34.382 EAL: Detected lcore 89 as core 21 on socket 1 00:04:34.382 EAL: Detected lcore 90 as core 24 on socket 1 00:04:34.382 EAL: Detected lcore 91 as core 25 on socket 1 00:04:34.382 EAL: Detected lcore 92 as core 26 on socket 1 00:04:34.382 EAL: Detected lcore 93 as core 27 on socket 1 00:04:34.382 EAL: Detected lcore 94 as core 28 on socket 1 00:04:34.382 EAL: Detected lcore 95 as core 29 on socket 1 00:04:34.382 EAL: Maximum logical cores by configuration: 128 00:04:34.382 EAL: Detected CPU lcores: 96 00:04:34.382 EAL: Detected NUMA nodes: 2 00:04:34.382 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:34.382 EAL: Detected shared linkage of DPDK 00:04:34.382 EAL: No shared files mode enabled, IPC will be disabled 00:04:34.382 EAL: Bus pci wants IOVA as 'DC' 00:04:34.382 EAL: Buses did not request a specific IOVA mode. 00:04:34.382 EAL: IOMMU is available, selecting IOVA as VA mode. 
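The lcore map EAL prints above (96 logical CPUs over 2 sockets, with lcores 48-95 repeating the core IDs of 0-47 as hyperthread siblings) can be cross-checked against the kernel's own topology view; an illustrative one-liner, not part of the test:

    # Columns: logical CPU, physical core, socket - should line up with the
    # "Detected lcore N as core M on socket S" lines in the EAL output.
    lscpu -p=CPU,CORE,SOCKET | grep -v '^#'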
00:04:34.382 EAL: Selected IOVA mode 'VA' 00:04:34.382 EAL: No free 2048 kB hugepages reported on node 1 00:04:34.382 EAL: Probing VFIO support... 00:04:34.382 EAL: IOMMU type 1 (Type 1) is supported 00:04:34.382 EAL: IOMMU type 7 (sPAPR) is not supported 00:04:34.382 EAL: IOMMU type 8 (No-IOMMU) is not supported 00:04:34.382 EAL: VFIO support initialized 00:04:34.382 EAL: Ask a virtual area of 0x2e000 bytes 00:04:34.382 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:34.382 EAL: Setting up physically contiguous memory... 00:04:34.382 EAL: Setting maximum number of open files to 524288 00:04:34.382 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:34.382 EAL: Detected memory type: socket_id:1 hugepage_sz:2097152 00:04:34.382 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:34.382 EAL: Creating 4 segment lists: n_segs:8192 socket_id:1 hugepage_sz:2097152 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x201000800000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x201000a00000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x201000a00000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x201400a00000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x201400c00000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x201400c00000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x201800c00000 (size = 0x61000) 00:04:34.382 EAL: Memseg list 
allocated at socket 1, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x201800e00000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x201800e00000, size 400000000 00:04:34.382 EAL: Ask a virtual area of 0x61000 bytes 00:04:34.382 EAL: Virtual area found at 0x201c00e00000 (size = 0x61000) 00:04:34.382 EAL: Memseg list allocated at socket 1, page size 0x800kB 00:04:34.382 EAL: Ask a virtual area of 0x400000000 bytes 00:04:34.382 EAL: Virtual area found at 0x201c01000000 (size = 0x400000000) 00:04:34.382 EAL: VA reserved for memseg list at 0x201c01000000, size 400000000 00:04:34.382 EAL: Hugepages will be freed exactly as allocated. 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: TSC frequency is ~2300000 KHz 00:04:34.382 EAL: Main lcore 0 is ready (tid=7f8a43513a00;cpuset=[0]) 00:04:34.382 EAL: Trying to obtain current memory policy. 00:04:34.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.382 EAL: Restoring previous memory policy: 0 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was expanded by 2MB 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:34.382 EAL: Mem event callback 'spdk:(nil)' registered 00:04:34.382 00:04:34.382 00:04:34.382 CUnit - A unit testing framework for C - Version 2.1-3 00:04:34.382 http://cunit.sourceforge.net/ 00:04:34.382 00:04:34.382 00:04:34.382 Suite: components_suite 00:04:34.382 Test: vtophys_malloc_test ...passed 00:04:34.382 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 00:04:34.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.382 EAL: Restoring previous memory policy: 4 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was expanded by 4MB 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was shrunk by 4MB 00:04:34.382 EAL: Trying to obtain current memory policy. 00:04:34.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.382 EAL: Restoring previous memory policy: 4 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was expanded by 6MB 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was shrunk by 6MB 00:04:34.382 EAL: Trying to obtain current memory policy. 
00:04:34.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.382 EAL: Restoring previous memory policy: 4 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.382 EAL: request: mp_malloc_sync 00:04:34.382 EAL: No shared files mode enabled, IPC is disabled 00:04:34.382 EAL: Heap on socket 0 was expanded by 10MB 00:04:34.382 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.383 EAL: request: mp_malloc_sync 00:04:34.383 EAL: No shared files mode enabled, IPC is disabled 00:04:34.383 EAL: Heap on socket 0 was shrunk by 10MB 00:04:34.383 EAL: Trying to obtain current memory policy. 00:04:34.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.383 EAL: Restoring previous memory policy: 4 00:04:34.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.383 EAL: request: mp_malloc_sync 00:04:34.383 EAL: No shared files mode enabled, IPC is disabled 00:04:34.383 EAL: Heap on socket 0 was expanded by 18MB 00:04:34.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.383 EAL: request: mp_malloc_sync 00:04:34.383 EAL: No shared files mode enabled, IPC is disabled 00:04:34.383 EAL: Heap on socket 0 was shrunk by 18MB 00:04:34.383 EAL: Trying to obtain current memory policy. 00:04:34.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.383 EAL: Restoring previous memory policy: 4 00:04:34.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.383 EAL: request: mp_malloc_sync 00:04:34.383 EAL: No shared files mode enabled, IPC is disabled 00:04:34.383 EAL: Heap on socket 0 was expanded by 34MB 00:04:34.383 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.383 EAL: request: mp_malloc_sync 00:04:34.383 EAL: No shared files mode enabled, IPC is disabled 00:04:34.383 EAL: Heap on socket 0 was shrunk by 34MB 00:04:34.383 EAL: Trying to obtain current memory policy. 00:04:34.383 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.643 EAL: Restoring previous memory policy: 4 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was expanded by 66MB 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was shrunk by 66MB 00:04:34.643 EAL: Trying to obtain current memory policy. 00:04:34.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.643 EAL: Restoring previous memory policy: 4 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was expanded by 130MB 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was shrunk by 130MB 00:04:34.643 EAL: Trying to obtain current memory policy. 
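The expand/shrink figures reported across this malloc test's ten rounds (4, 6, 10, 18, 34, 66, 130, 258, 514 and 1026 MB) follow a 2^k + 2 MB progression; a quick way to reproduce the list from the trace:

    # Heap growth per round of vtophys_spdk_malloc_test, k = 1..10.
    for k in $(seq 1 10); do
      echo "$((2 ** k + 2)) MB"
    done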
00:04:34.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.643 EAL: Restoring previous memory policy: 4 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was expanded by 258MB 00:04:34.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.643 EAL: request: mp_malloc_sync 00:04:34.643 EAL: No shared files mode enabled, IPC is disabled 00:04:34.643 EAL: Heap on socket 0 was shrunk by 258MB 00:04:34.643 EAL: Trying to obtain current memory policy. 00:04:34.643 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:34.902 EAL: Restoring previous memory policy: 4 00:04:34.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.902 EAL: request: mp_malloc_sync 00:04:34.902 EAL: No shared files mode enabled, IPC is disabled 00:04:34.902 EAL: Heap on socket 0 was expanded by 514MB 00:04:34.902 EAL: Calling mem event callback 'spdk:(nil)' 00:04:34.902 EAL: request: mp_malloc_sync 00:04:34.902 EAL: No shared files mode enabled, IPC is disabled 00:04:34.902 EAL: Heap on socket 0 was shrunk by 514MB 00:04:34.902 EAL: Trying to obtain current memory policy. 00:04:34.902 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:35.162 EAL: Restoring previous memory policy: 4 00:04:35.162 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.162 EAL: request: mp_malloc_sync 00:04:35.162 EAL: No shared files mode enabled, IPC is disabled 00:04:35.162 EAL: Heap on socket 0 was expanded by 1026MB 00:04:35.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.422 EAL: request: mp_malloc_sync 00:04:35.422 EAL: No shared files mode enabled, IPC is disabled 00:04:35.422 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:35.422 passed 00:04:35.422 00:04:35.422 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.422 suites 1 1 n/a 0 0 00:04:35.422 tests 2 2 2 0 0 00:04:35.422 asserts 497 497 497 0 n/a 00:04:35.422 00:04:35.422 Elapsed time = 0.962 seconds 00:04:35.422 EAL: Calling mem event callback 'spdk:(nil)' 00:04:35.422 EAL: request: mp_malloc_sync 00:04:35.422 EAL: No shared files mode enabled, IPC is disabled 00:04:35.422 EAL: Heap on socket 0 was shrunk by 2MB 00:04:35.422 EAL: No shared files mode enabled, IPC is disabled 00:04:35.422 EAL: No shared files mode enabled, IPC is disabled 00:04:35.422 EAL: No shared files mode enabled, IPC is disabled 00:04:35.422 00:04:35.422 real 0m1.078s 00:04:35.422 user 0m0.632s 00:04:35.422 sys 0m0.411s 00:04:35.422 11:50:22 env.env_vtophys -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.422 11:50:22 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:35.422 ************************************ 00:04:35.422 END TEST env_vtophys 00:04:35.422 ************************************ 00:04:35.422 11:50:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.422 11:50:22 env -- env/env.sh@12 -- # run_test env_pci /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:35.422 11:50:22 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:35.422 11:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.422 11:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.422 ************************************ 00:04:35.422 START TEST env_pci 00:04:35.422 ************************************ 00:04:35.422 11:50:22 env.env_pci -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/pci/pci_ut 00:04:35.682 00:04:35.682 00:04:35.682 CUnit - A unit testing framework for C - Version 2.1-3 00:04:35.682 http://cunit.sourceforge.net/ 00:04:35.682 00:04:35.682 00:04:35.682 Suite: pci 00:04:35.682 Test: pci_hook ...[2024-07-25 11:50:22.680512] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk/pci.c:1040:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 141855 has claimed it 00:04:35.682 EAL: Cannot find device (10000:00:01.0) 00:04:35.682 EAL: Failed to attach device on primary process 00:04:35.682 passed 00:04:35.682 00:04:35.682 Run Summary: Type Total Ran Passed Failed Inactive 00:04:35.682 suites 1 1 n/a 0 0 00:04:35.682 tests 1 1 1 0 0 00:04:35.682 asserts 25 25 25 0 n/a 00:04:35.682 00:04:35.682 Elapsed time = 0.024 seconds 00:04:35.682 00:04:35.682 real 0m0.042s 00:04:35.682 user 0m0.015s 00:04:35.682 sys 0m0.027s 00:04:35.682 11:50:22 env.env_pci -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:35.682 11:50:22 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:35.682 ************************************ 00:04:35.682 END TEST env_pci 00:04:35.682 ************************************ 00:04:35.682 11:50:22 env -- common/autotest_common.sh@1142 -- # return 0 00:04:35.682 11:50:22 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:35.682 11:50:22 env -- env/env.sh@15 -- # uname 00:04:35.682 11:50:22 env -- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:35.682 11:50:22 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:35.682 11:50:22 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.682 11:50:22 env -- common/autotest_common.sh@1099 -- # '[' 5 -le 1 ']' 00:04:35.682 11:50:22 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:35.682 11:50:22 env -- common/autotest_common.sh@10 -- # set +x 00:04:35.682 ************************************ 00:04:35.682 START TEST env_dpdk_post_init 00:04:35.682 ************************************ 00:04:35.682 11:50:22 env.env_dpdk_post_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:35.682 EAL: Detected CPU lcores: 96 00:04:35.682 EAL: Detected NUMA nodes: 2 00:04:35.682 EAL: Detected shared linkage of DPDK 00:04:35.682 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:35.682 EAL: Selected IOVA mode 'VA' 00:04:35.682 EAL: No free 2048 kB hugepages reported on node 1 00:04:35.682 EAL: VFIO support initialized 00:04:35.682 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:35.682 EAL: Using IOMMU type 1 (Type 1) 00:04:35.682 EAL: Ignore mapping IO port bar(1) 00:04:35.682 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.0 (socket 0) 00:04:35.682 EAL: Ignore mapping IO port bar(1) 00:04:35.682 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.1 (socket 0) 00:04:35.682 EAL: Ignore mapping IO port bar(1) 00:04:35.682 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.2 (socket 0) 00:04:35.682 EAL: Ignore mapping IO port bar(1) 00:04:35.682 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.3 (socket 0) 00:04:35.942 EAL: Ignore mapping IO port bar(1) 00:04:35.942 EAL: Probe PCI driver: spdk_ioat (8086:2021) 
device: 0000:00:04.4 (socket 0) 00:04:35.942 EAL: Ignore mapping IO port bar(1) 00:04:35.942 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.5 (socket 0) 00:04:35.942 EAL: Ignore mapping IO port bar(1) 00:04:35.942 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.6 (socket 0) 00:04:35.942 EAL: Ignore mapping IO port bar(1) 00:04:35.942 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:00:04.7 (socket 0) 00:04:36.511 EAL: Probe PCI driver: spdk_nvme (8086:0a54) device: 0000:5e:00.0 (socket 0) 00:04:36.511 EAL: Ignore mapping IO port bar(1) 00:04:36.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.0 (socket 1) 00:04:36.511 EAL: Ignore mapping IO port bar(1) 00:04:36.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.1 (socket 1) 00:04:36.511 EAL: Ignore mapping IO port bar(1) 00:04:36.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.2 (socket 1) 00:04:36.511 EAL: Ignore mapping IO port bar(1) 00:04:36.511 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.3 (socket 1) 00:04:36.770 EAL: Ignore mapping IO port bar(1) 00:04:36.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.4 (socket 1) 00:04:36.770 EAL: Ignore mapping IO port bar(1) 00:04:36.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.5 (socket 1) 00:04:36.770 EAL: Ignore mapping IO port bar(1) 00:04:36.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.6 (socket 1) 00:04:36.770 EAL: Ignore mapping IO port bar(1) 00:04:36.770 EAL: Probe PCI driver: spdk_ioat (8086:2021) device: 0000:80:04.7 (socket 1) 00:04:40.069 EAL: Releasing PCI mapped resource for 0000:5e:00.0 00:04:40.069 EAL: Calling pci_unmap_resource for 0000:5e:00.0 at 0x202001020000 00:04:40.069 Starting DPDK initialization... 00:04:40.069 Starting SPDK post initialization... 00:04:40.069 SPDK NVMe probe 00:04:40.069 Attaching to 0000:5e:00.0 00:04:40.069 Attached to 0000:5e:00.0 00:04:40.069 Cleaning up... 
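For reference, the env_dpdk_post_init run finishing here is launched with the arguments shown earlier in the trace; a minimal reproduction of that invocation, plus an illustrative driver check (the sysfs path is standard Linux, not taken from the test):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    # Single core (-c 0x1) and a fixed base virtual address, as passed by env.sh above.
    "$rootdir/test/env/env_dpdk_post_init/env_dpdk_post_init" -c 0x1 --base-virtaddr=0x200000000000
    # Confirm which kernel driver currently owns the probed NVMe device:
    basename "$(readlink -f /sys/bus/pci/devices/0000:5e:00.0/driver)"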
00:04:40.069 00:04:40.069 real 0m4.331s 00:04:40.069 user 0m3.284s 00:04:40.069 sys 0m0.122s 00:04:40.069 11:50:27 env.env_dpdk_post_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.069 11:50:27 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 END TEST env_dpdk_post_init 00:04:40.069 ************************************ 00:04:40.069 11:50:27 env -- common/autotest_common.sh@1142 -- # return 0 00:04:40.069 11:50:27 env -- env/env.sh@26 -- # uname 00:04:40.069 11:50:27 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:40.069 11:50:27 env -- env/env.sh@29 -- # run_test env_mem_callbacks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:40.069 11:50:27 env -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.069 11:50:27 env -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.069 11:50:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 START TEST env_mem_callbacks 00:04:40.069 ************************************ 00:04:40.069 11:50:27 env.env_mem_callbacks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/env/mem_callbacks/mem_callbacks 00:04:40.069 EAL: Detected CPU lcores: 96 00:04:40.069 EAL: Detected NUMA nodes: 2 00:04:40.069 EAL: Detected shared linkage of DPDK 00:04:40.069 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:40.069 EAL: Selected IOVA mode 'VA' 00:04:40.069 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.069 EAL: VFIO support initialized 00:04:40.069 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:40.069 00:04:40.069 00:04:40.069 CUnit - A unit testing framework for C - Version 2.1-3 00:04:40.069 http://cunit.sourceforge.net/ 00:04:40.069 00:04:40.069 00:04:40.069 Suite: memory 00:04:40.069 Test: test ... 
00:04:40.069 register 0x200000200000 2097152 00:04:40.069 malloc 3145728 00:04:40.069 register 0x200000400000 4194304 00:04:40.069 buf 0x200000500000 len 3145728 PASSED 00:04:40.069 malloc 64 00:04:40.069 buf 0x2000004fff40 len 64 PASSED 00:04:40.069 malloc 4194304 00:04:40.069 register 0x200000800000 6291456 00:04:40.069 buf 0x200000a00000 len 4194304 PASSED 00:04:40.069 free 0x200000500000 3145728 00:04:40.069 free 0x2000004fff40 64 00:04:40.069 unregister 0x200000400000 4194304 PASSED 00:04:40.069 free 0x200000a00000 4194304 00:04:40.069 unregister 0x200000800000 6291456 PASSED 00:04:40.069 malloc 8388608 00:04:40.069 register 0x200000400000 10485760 00:04:40.069 buf 0x200000600000 len 8388608 PASSED 00:04:40.069 free 0x200000600000 8388608 00:04:40.069 unregister 0x200000400000 10485760 PASSED 00:04:40.069 passed 00:04:40.069 00:04:40.069 Run Summary: Type Total Ran Passed Failed Inactive 00:04:40.069 suites 1 1 n/a 0 0 00:04:40.069 tests 1 1 1 0 0 00:04:40.069 asserts 15 15 15 0 n/a 00:04:40.069 00:04:40.069 Elapsed time = 0.005 seconds 00:04:40.069 00:04:40.069 real 0m0.038s 00:04:40.069 user 0m0.012s 00:04:40.069 sys 0m0.026s 00:04:40.069 11:50:27 env.env_mem_callbacks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.069 11:50:27 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 END TEST env_mem_callbacks 00:04:40.069 ************************************ 00:04:40.069 11:50:27 env -- common/autotest_common.sh@1142 -- # return 0 00:04:40.069 00:04:40.069 real 0m6.011s 00:04:40.069 user 0m4.227s 00:04:40.069 sys 0m0.848s 00:04:40.069 11:50:27 env -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:40.069 11:50:27 env -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 END TEST env 00:04:40.069 ************************************ 00:04:40.069 11:50:27 -- common/autotest_common.sh@1142 -- # return 0 00:04:40.069 11:50:27 -- spdk/autotest.sh@169 -- # run_test rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:40.069 11:50:27 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:40.069 11:50:27 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:40.069 11:50:27 -- common/autotest_common.sh@10 -- # set +x 00:04:40.069 ************************************ 00:04:40.069 START TEST rpc 00:04:40.069 ************************************ 00:04:40.069 11:50:27 rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/rpc.sh 00:04:40.329 * Looking for test storage... 00:04:40.329 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:40.329 11:50:27 rpc -- rpc/rpc.sh@65 -- # spdk_pid=142677 00:04:40.329 11:50:27 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:40.329 11:50:27 rpc -- rpc/rpc.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -e bdev 00:04:40.329 11:50:27 rpc -- rpc/rpc.sh@67 -- # waitforlisten 142677 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@829 -- # '[' -z 142677 ']' 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
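The rpc.sh prologue traced just above starts its own spdk_tgt and blocks on waitforlisten before running any RPC test; a sketch of that pattern as it appears in the trace (capturing the pid via $! is an assumption here, the trace only shows the resulting pid 142677):

    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    "$rootdir/build/bin/spdk_tgt" -e bdev &     # -e bdev: the "Tracepoint Group Mask bdev specified" notice above
    spdk_pid=$!                                 # assumed pid capture; the trace shows spdk_pid=142677
    trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT
    waitforlisten "$spdk_pid"                   # autotest_common.sh helper: waits on /var/tmp/spdk.sock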
00:04:40.329 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:40.329 11:50:27 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.329 [2024-07-25 11:50:27.425899] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:04:40.329 [2024-07-25 11:50:27.425946] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid142677 ] 00:04:40.329 EAL: No free 2048 kB hugepages reported on node 1 00:04:40.329 [2024-07-25 11:50:27.480290] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.329 [2024-07-25 11:50:27.553117] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:40.329 [2024-07-25 11:50:27.553159] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 142677' to capture a snapshot of events at runtime. 00:04:40.329 [2024-07-25 11:50:27.553166] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:40.329 [2024-07-25 11:50:27.553173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:40.329 [2024-07-25 11:50:27.553178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid142677 for offline analysis/debug. 00:04:40.329 [2024-07-25 11:50:27.553198] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:41.270 11:50:28 rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:41.270 11:50:28 rpc -- common/autotest_common.sh@862 -- # return 0 00:04:41.270 11:50:28 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.270 11:50:28 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:41.270 11:50:28 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:41.270 11:50:28 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:41.270 11:50:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.270 11:50:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.270 11:50:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 ************************************ 00:04:41.270 START TEST rpc_integrity 00:04:41.270 ************************************ 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # 
bdevs='[]' 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:41.270 { 00:04:41.270 "name": "Malloc0", 00:04:41.270 "aliases": [ 00:04:41.270 "cbf686ec-1d58-4f47-9310-25044fe5e7d1" 00:04:41.270 ], 00:04:41.270 "product_name": "Malloc disk", 00:04:41.270 "block_size": 512, 00:04:41.270 "num_blocks": 16384, 00:04:41.270 "uuid": "cbf686ec-1d58-4f47-9310-25044fe5e7d1", 00:04:41.270 "assigned_rate_limits": { 00:04:41.270 "rw_ios_per_sec": 0, 00:04:41.270 "rw_mbytes_per_sec": 0, 00:04:41.270 "r_mbytes_per_sec": 0, 00:04:41.270 "w_mbytes_per_sec": 0 00:04:41.270 }, 00:04:41.270 "claimed": false, 00:04:41.270 "zoned": false, 00:04:41.270 "supported_io_types": { 00:04:41.270 "read": true, 00:04:41.270 "write": true, 00:04:41.270 "unmap": true, 00:04:41.270 "flush": true, 00:04:41.270 "reset": true, 00:04:41.270 "nvme_admin": false, 00:04:41.270 "nvme_io": false, 00:04:41.270 "nvme_io_md": false, 00:04:41.270 "write_zeroes": true, 00:04:41.270 "zcopy": true, 00:04:41.270 "get_zone_info": false, 00:04:41.270 "zone_management": false, 00:04:41.270 "zone_append": false, 00:04:41.270 "compare": false, 00:04:41.270 "compare_and_write": false, 00:04:41.270 "abort": true, 00:04:41.270 "seek_hole": false, 00:04:41.270 "seek_data": false, 00:04:41.270 "copy": true, 00:04:41.270 "nvme_iov_md": false 00:04:41.270 }, 00:04:41.270 "memory_domains": [ 00:04:41.270 { 00:04:41.270 "dma_device_id": "system", 00:04:41.270 "dma_device_type": 1 00:04:41.270 }, 00:04:41.270 { 00:04:41.270 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.270 "dma_device_type": 2 00:04:41.270 } 00:04:41.270 ], 00:04:41.270 "driver_specific": {} 00:04:41.270 } 00:04:41.270 ]' 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 [2024-07-25 11:50:28.379027] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:41.270 [2024-07-25 11:50:28.379062] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:41.270 [2024-07-25 11:50:28.379075] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x6eb2d0 00:04:41.270 [2024-07-25 11:50:28.379081] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:41.270 
[2024-07-25 11:50:28.380192] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:41.270 [2024-07-25 11:50:28.380215] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:41.270 Passthru0 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.270 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.270 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:41.270 { 00:04:41.270 "name": "Malloc0", 00:04:41.270 "aliases": [ 00:04:41.270 "cbf686ec-1d58-4f47-9310-25044fe5e7d1" 00:04:41.270 ], 00:04:41.270 "product_name": "Malloc disk", 00:04:41.270 "block_size": 512, 00:04:41.270 "num_blocks": 16384, 00:04:41.270 "uuid": "cbf686ec-1d58-4f47-9310-25044fe5e7d1", 00:04:41.270 "assigned_rate_limits": { 00:04:41.270 "rw_ios_per_sec": 0, 00:04:41.271 "rw_mbytes_per_sec": 0, 00:04:41.271 "r_mbytes_per_sec": 0, 00:04:41.271 "w_mbytes_per_sec": 0 00:04:41.271 }, 00:04:41.271 "claimed": true, 00:04:41.271 "claim_type": "exclusive_write", 00:04:41.271 "zoned": false, 00:04:41.271 "supported_io_types": { 00:04:41.271 "read": true, 00:04:41.271 "write": true, 00:04:41.271 "unmap": true, 00:04:41.271 "flush": true, 00:04:41.271 "reset": true, 00:04:41.271 "nvme_admin": false, 00:04:41.271 "nvme_io": false, 00:04:41.271 "nvme_io_md": false, 00:04:41.271 "write_zeroes": true, 00:04:41.271 "zcopy": true, 00:04:41.271 "get_zone_info": false, 00:04:41.271 "zone_management": false, 00:04:41.271 "zone_append": false, 00:04:41.271 "compare": false, 00:04:41.271 "compare_and_write": false, 00:04:41.271 "abort": true, 00:04:41.271 "seek_hole": false, 00:04:41.271 "seek_data": false, 00:04:41.271 "copy": true, 00:04:41.271 "nvme_iov_md": false 00:04:41.271 }, 00:04:41.271 "memory_domains": [ 00:04:41.271 { 00:04:41.271 "dma_device_id": "system", 00:04:41.271 "dma_device_type": 1 00:04:41.271 }, 00:04:41.271 { 00:04:41.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.271 "dma_device_type": 2 00:04:41.271 } 00:04:41.271 ], 00:04:41.271 "driver_specific": {} 00:04:41.271 }, 00:04:41.271 { 00:04:41.271 "name": "Passthru0", 00:04:41.271 "aliases": [ 00:04:41.271 "6c604b12-2b52-5e7c-b36d-cb2dcef1bea2" 00:04:41.271 ], 00:04:41.271 "product_name": "passthru", 00:04:41.271 "block_size": 512, 00:04:41.271 "num_blocks": 16384, 00:04:41.271 "uuid": "6c604b12-2b52-5e7c-b36d-cb2dcef1bea2", 00:04:41.271 "assigned_rate_limits": { 00:04:41.271 "rw_ios_per_sec": 0, 00:04:41.271 "rw_mbytes_per_sec": 0, 00:04:41.271 "r_mbytes_per_sec": 0, 00:04:41.271 "w_mbytes_per_sec": 0 00:04:41.271 }, 00:04:41.271 "claimed": false, 00:04:41.271 "zoned": false, 00:04:41.271 "supported_io_types": { 00:04:41.271 "read": true, 00:04:41.271 "write": true, 00:04:41.271 "unmap": true, 00:04:41.271 "flush": true, 00:04:41.271 "reset": true, 00:04:41.271 "nvme_admin": false, 00:04:41.271 "nvme_io": false, 00:04:41.271 "nvme_io_md": false, 00:04:41.271 "write_zeroes": true, 00:04:41.271 "zcopy": true, 00:04:41.271 "get_zone_info": false, 00:04:41.271 "zone_management": false, 00:04:41.271 "zone_append": false, 00:04:41.271 "compare": false, 00:04:41.271 "compare_and_write": false, 00:04:41.271 "abort": true, 00:04:41.271 "seek_hole": false, 
00:04:41.271 "seek_data": false, 00:04:41.271 "copy": true, 00:04:41.271 "nvme_iov_md": false 00:04:41.271 }, 00:04:41.271 "memory_domains": [ 00:04:41.271 { 00:04:41.271 "dma_device_id": "system", 00:04:41.271 "dma_device_type": 1 00:04:41.271 }, 00:04:41.271 { 00:04:41.271 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.271 "dma_device_type": 2 00:04:41.271 } 00:04:41.271 ], 00:04:41.271 "driver_specific": { 00:04:41.271 "passthru": { 00:04:41.271 "name": "Passthru0", 00:04:41.271 "base_bdev_name": "Malloc0" 00:04:41.271 } 00:04:41.271 } 00:04:41.271 } 00:04:41.271 ]' 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:41.271 11:50:28 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:41.271 00:04:41.271 real 0m0.270s 00:04:41.271 user 0m0.179s 00:04:41.271 sys 0m0.037s 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.271 11:50:28 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:41.271 ************************************ 00:04:41.271 END TEST rpc_integrity 00:04:41.271 ************************************ 00:04:41.531 11:50:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.531 11:50:28 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:41.531 11:50:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.531 11:50:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.531 11:50:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.531 ************************************ 00:04:41.531 START TEST rpc_plugins 00:04:41.531 ************************************ 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@1123 -- # rpc_plugins 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # 
rpc_cmd bdev_get_bdevs 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:41.531 { 00:04:41.531 "name": "Malloc1", 00:04:41.531 "aliases": [ 00:04:41.531 "ef7598a3-8d56-4662-b468-d25785362582" 00:04:41.531 ], 00:04:41.531 "product_name": "Malloc disk", 00:04:41.531 "block_size": 4096, 00:04:41.531 "num_blocks": 256, 00:04:41.531 "uuid": "ef7598a3-8d56-4662-b468-d25785362582", 00:04:41.531 "assigned_rate_limits": { 00:04:41.531 "rw_ios_per_sec": 0, 00:04:41.531 "rw_mbytes_per_sec": 0, 00:04:41.531 "r_mbytes_per_sec": 0, 00:04:41.531 "w_mbytes_per_sec": 0 00:04:41.531 }, 00:04:41.531 "claimed": false, 00:04:41.531 "zoned": false, 00:04:41.531 "supported_io_types": { 00:04:41.531 "read": true, 00:04:41.531 "write": true, 00:04:41.531 "unmap": true, 00:04:41.531 "flush": true, 00:04:41.531 "reset": true, 00:04:41.531 "nvme_admin": false, 00:04:41.531 "nvme_io": false, 00:04:41.531 "nvme_io_md": false, 00:04:41.531 "write_zeroes": true, 00:04:41.531 "zcopy": true, 00:04:41.531 "get_zone_info": false, 00:04:41.531 "zone_management": false, 00:04:41.531 "zone_append": false, 00:04:41.531 "compare": false, 00:04:41.531 "compare_and_write": false, 00:04:41.531 "abort": true, 00:04:41.531 "seek_hole": false, 00:04:41.531 "seek_data": false, 00:04:41.531 "copy": true, 00:04:41.531 "nvme_iov_md": false 00:04:41.531 }, 00:04:41.531 "memory_domains": [ 00:04:41.531 { 00:04:41.531 "dma_device_id": "system", 00:04:41.531 "dma_device_type": 1 00:04:41.531 }, 00:04:41.531 { 00:04:41.531 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:41.531 "dma_device_type": 2 00:04:41.531 } 00:04:41.531 ], 00:04:41.531 "driver_specific": {} 00:04:41.531 } 00:04:41.531 ]' 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:41.531 11:50:28 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:41.531 00:04:41.531 real 0m0.138s 00:04:41.531 user 0m0.091s 00:04:41.531 sys 0m0.016s 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.531 11:50:28 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:41.532 ************************************ 00:04:41.532 END TEST rpc_plugins 00:04:41.532 ************************************ 00:04:41.532 11:50:28 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.532 11:50:28 rpc -- rpc/rpc.sh@75 -- # 
run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:41.532 11:50:28 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.532 11:50:28 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.532 11:50:28 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:41.532 ************************************ 00:04:41.532 START TEST rpc_trace_cmd_test 00:04:41.532 ************************************ 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1123 -- # rpc_trace_cmd_test 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:41.822 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid142677", 00:04:41.822 "tpoint_group_mask": "0x8", 00:04:41.822 "iscsi_conn": { 00:04:41.822 "mask": "0x2", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "scsi": { 00:04:41.822 "mask": "0x4", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "bdev": { 00:04:41.822 "mask": "0x8", 00:04:41.822 "tpoint_mask": "0xffffffffffffffff" 00:04:41.822 }, 00:04:41.822 "nvmf_rdma": { 00:04:41.822 "mask": "0x10", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "nvmf_tcp": { 00:04:41.822 "mask": "0x20", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "ftl": { 00:04:41.822 "mask": "0x40", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "blobfs": { 00:04:41.822 "mask": "0x80", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "dsa": { 00:04:41.822 "mask": "0x200", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "thread": { 00:04:41.822 "mask": "0x400", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "nvme_pcie": { 00:04:41.822 "mask": "0x800", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "iaa": { 00:04:41.822 "mask": "0x1000", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "nvme_tcp": { 00:04:41.822 "mask": "0x2000", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "bdev_nvme": { 00:04:41.822 "mask": "0x4000", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 }, 00:04:41.822 "sock": { 00:04:41.822 "mask": "0x8000", 00:04:41.822 "tpoint_mask": "0x0" 00:04:41.822 } 00:04:41.822 }' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 16 -gt 2 ']' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:41.822 11:50:28 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:41.822 11:50:29 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 
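The tpoint-mask checks above can be reproduced by hand against the still-running spdk_tgt; a minimal sketch, assuming the SPDK tree's scripts/rpc.py client and jq are available on the test host (the test itself only wraps them via rpc_cmd and jq):

  # same RPC the test wraps: dump trace info and pull out the bdev group mask
  ./scripts/rpc.py -s /var/tmp/spdk.sock trace_get_info | jq -r '.bdev.tpoint_mask'
  # non-zero here because spdk_tgt was started with '-e bdev'
  # snapshot the events at runtime, as the app_setup_trace notice earlier in this log suggests
  spdk_trace -s spdk_tgt -p <spdk_tgt_pid>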
00:04:41.822 00:04:41.822 real 0m0.223s 00:04:41.822 user 0m0.197s 00:04:41.822 sys 0m0.018s 00:04:41.822 11:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:41.822 11:50:29 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:41.822 ************************************ 00:04:41.822 END TEST rpc_trace_cmd_test 00:04:41.822 ************************************ 00:04:41.822 11:50:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:41.822 11:50:29 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:41.822 11:50:29 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:41.822 11:50:29 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:41.822 11:50:29 rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:41.822 11:50:29 rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:41.822 11:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 ************************************ 00:04:42.092 START TEST rpc_daemon_integrity 00:04:42.092 ************************************ 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1123 -- # rpc_integrity 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:42.092 { 00:04:42.092 "name": "Malloc2", 00:04:42.092 "aliases": [ 00:04:42.092 "53c54713-477b-41ab-bdf0-2aafb6539f07" 00:04:42.092 ], 00:04:42.092 "product_name": "Malloc disk", 00:04:42.092 "block_size": 512, 00:04:42.092 "num_blocks": 16384, 00:04:42.092 "uuid": "53c54713-477b-41ab-bdf0-2aafb6539f07", 00:04:42.092 "assigned_rate_limits": { 00:04:42.092 "rw_ios_per_sec": 0, 00:04:42.092 "rw_mbytes_per_sec": 0, 00:04:42.092 "r_mbytes_per_sec": 0, 00:04:42.092 "w_mbytes_per_sec": 0 00:04:42.092 }, 00:04:42.092 "claimed": false, 00:04:42.092 "zoned": false, 00:04:42.092 "supported_io_types": { 00:04:42.092 "read": true, 00:04:42.092 "write": true, 00:04:42.092 "unmap": true, 00:04:42.092 "flush": true, 00:04:42.092 "reset": true, 00:04:42.092 "nvme_admin": false, 00:04:42.092 "nvme_io": false, 
00:04:42.092 "nvme_io_md": false, 00:04:42.092 "write_zeroes": true, 00:04:42.092 "zcopy": true, 00:04:42.092 "get_zone_info": false, 00:04:42.092 "zone_management": false, 00:04:42.092 "zone_append": false, 00:04:42.092 "compare": false, 00:04:42.092 "compare_and_write": false, 00:04:42.092 "abort": true, 00:04:42.092 "seek_hole": false, 00:04:42.092 "seek_data": false, 00:04:42.092 "copy": true, 00:04:42.092 "nvme_iov_md": false 00:04:42.092 }, 00:04:42.092 "memory_domains": [ 00:04:42.092 { 00:04:42.092 "dma_device_id": "system", 00:04:42.092 "dma_device_type": 1 00:04:42.092 }, 00:04:42.092 { 00:04:42.092 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.092 "dma_device_type": 2 00:04:42.092 } 00:04:42.092 ], 00:04:42.092 "driver_specific": {} 00:04:42.092 } 00:04:42.092 ]' 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 [2024-07-25 11:50:29.189228] vbdev_passthru.c: 607:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:42.092 [2024-07-25 11:50:29.189258] vbdev_passthru.c: 635:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:42.092 [2024-07-25 11:50:29.189272] vbdev_passthru.c: 681:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x882ac0 00:04:42.092 [2024-07-25 11:50:29.189279] vbdev_passthru.c: 696:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:42.092 [2024-07-25 11:50:29.190263] vbdev_passthru.c: 709:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:42.092 [2024-07-25 11:50:29.190284] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:42.092 Passthru0 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.092 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:42.092 { 00:04:42.092 "name": "Malloc2", 00:04:42.092 "aliases": [ 00:04:42.092 "53c54713-477b-41ab-bdf0-2aafb6539f07" 00:04:42.092 ], 00:04:42.092 "product_name": "Malloc disk", 00:04:42.092 "block_size": 512, 00:04:42.092 "num_blocks": 16384, 00:04:42.092 "uuid": "53c54713-477b-41ab-bdf0-2aafb6539f07", 00:04:42.092 "assigned_rate_limits": { 00:04:42.092 "rw_ios_per_sec": 0, 00:04:42.092 "rw_mbytes_per_sec": 0, 00:04:42.092 "r_mbytes_per_sec": 0, 00:04:42.092 "w_mbytes_per_sec": 0 00:04:42.092 }, 00:04:42.092 "claimed": true, 00:04:42.092 "claim_type": "exclusive_write", 00:04:42.092 "zoned": false, 00:04:42.092 "supported_io_types": { 00:04:42.092 "read": true, 00:04:42.092 "write": true, 00:04:42.092 "unmap": true, 00:04:42.092 "flush": true, 00:04:42.092 "reset": true, 00:04:42.092 "nvme_admin": false, 00:04:42.092 "nvme_io": false, 00:04:42.092 "nvme_io_md": false, 00:04:42.092 "write_zeroes": true, 00:04:42.093 "zcopy": true, 00:04:42.093 "get_zone_info": 
false, 00:04:42.093 "zone_management": false, 00:04:42.093 "zone_append": false, 00:04:42.093 "compare": false, 00:04:42.093 "compare_and_write": false, 00:04:42.093 "abort": true, 00:04:42.093 "seek_hole": false, 00:04:42.093 "seek_data": false, 00:04:42.093 "copy": true, 00:04:42.093 "nvme_iov_md": false 00:04:42.093 }, 00:04:42.093 "memory_domains": [ 00:04:42.093 { 00:04:42.093 "dma_device_id": "system", 00:04:42.093 "dma_device_type": 1 00:04:42.093 }, 00:04:42.093 { 00:04:42.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.093 "dma_device_type": 2 00:04:42.093 } 00:04:42.093 ], 00:04:42.093 "driver_specific": {} 00:04:42.093 }, 00:04:42.093 { 00:04:42.093 "name": "Passthru0", 00:04:42.093 "aliases": [ 00:04:42.093 "5f0bd0a2-357c-5419-b602-bbfd17190371" 00:04:42.093 ], 00:04:42.093 "product_name": "passthru", 00:04:42.093 "block_size": 512, 00:04:42.093 "num_blocks": 16384, 00:04:42.093 "uuid": "5f0bd0a2-357c-5419-b602-bbfd17190371", 00:04:42.093 "assigned_rate_limits": { 00:04:42.093 "rw_ios_per_sec": 0, 00:04:42.093 "rw_mbytes_per_sec": 0, 00:04:42.093 "r_mbytes_per_sec": 0, 00:04:42.093 "w_mbytes_per_sec": 0 00:04:42.093 }, 00:04:42.093 "claimed": false, 00:04:42.093 "zoned": false, 00:04:42.093 "supported_io_types": { 00:04:42.093 "read": true, 00:04:42.093 "write": true, 00:04:42.093 "unmap": true, 00:04:42.093 "flush": true, 00:04:42.093 "reset": true, 00:04:42.093 "nvme_admin": false, 00:04:42.093 "nvme_io": false, 00:04:42.093 "nvme_io_md": false, 00:04:42.093 "write_zeroes": true, 00:04:42.093 "zcopy": true, 00:04:42.093 "get_zone_info": false, 00:04:42.093 "zone_management": false, 00:04:42.093 "zone_append": false, 00:04:42.093 "compare": false, 00:04:42.093 "compare_and_write": false, 00:04:42.093 "abort": true, 00:04:42.093 "seek_hole": false, 00:04:42.093 "seek_data": false, 00:04:42.093 "copy": true, 00:04:42.093 "nvme_iov_md": false 00:04:42.093 }, 00:04:42.093 "memory_domains": [ 00:04:42.093 { 00:04:42.093 "dma_device_id": "system", 00:04:42.093 "dma_device_type": 1 00:04:42.093 }, 00:04:42.093 { 00:04:42.093 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:42.093 "dma_device_type": 2 00:04:42.093 } 00:04:42.093 ], 00:04:42.093 "driver_specific": { 00:04:42.093 "passthru": { 00:04:42.093 "name": "Passthru0", 00:04:42.093 "base_bdev_name": "Malloc2" 00:04:42.093 } 00:04:42.093 } 00:04:42.093 } 00:04:42.093 ]' 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:42.093 11:50:29 
rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:42.093 00:04:42.093 real 0m0.261s 00:04:42.093 user 0m0.172s 00:04:42.093 sys 0m0.036s 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.093 11:50:29 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:42.093 ************************************ 00:04:42.093 END TEST rpc_daemon_integrity 00:04:42.093 ************************************ 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:42.354 11:50:29 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:42.354 11:50:29 rpc -- rpc/rpc.sh@84 -- # killprocess 142677 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@948 -- # '[' -z 142677 ']' 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@952 -- # kill -0 142677 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@953 -- # uname 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 142677 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 142677' 00:04:42.354 killing process with pid 142677 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@967 -- # kill 142677 00:04:42.354 11:50:29 rpc -- common/autotest_common.sh@972 -- # wait 142677 00:04:42.614 00:04:42.614 real 0m2.416s 00:04:42.614 user 0m3.172s 00:04:42.614 sys 0m0.614s 00:04:42.614 11:50:29 rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:42.614 11:50:29 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.614 ************************************ 00:04:42.614 END TEST rpc 00:04:42.614 ************************************ 00:04:42.614 11:50:29 -- common/autotest_common.sh@1142 -- # return 0 00:04:42.614 11:50:29 -- spdk/autotest.sh@170 -- # run_test skip_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.614 11:50:29 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.614 11:50:29 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.614 11:50:29 -- common/autotest_common.sh@10 -- # set +x 00:04:42.614 ************************************ 00:04:42.614 START TEST skip_rpc 00:04:42.614 ************************************ 00:04:42.614 11:50:29 skip_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/skip_rpc.sh 00:04:42.614 * Looking for test storage... 
00:04:42.614 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc 00:04:42.614 11:50:29 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:42.614 11:50:29 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:42.614 11:50:29 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:42.614 11:50:29 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:42.614 11:50:29 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:42.614 11:50:29 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:42.874 ************************************ 00:04:42.874 START TEST skip_rpc 00:04:42.874 ************************************ 00:04:42.874 11:50:29 skip_rpc.skip_rpc -- common/autotest_common.sh@1123 -- # test_skip_rpc 00:04:42.874 11:50:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=143311 00:04:42.874 11:50:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:42.874 11:50:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:42.874 11:50:29 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:42.874 [2024-07-25 11:50:29.926076] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:04:42.874 [2024-07-25 11:50:29.926115] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid143311 ] 00:04:42.874 EAL: No free 2048 kB hugepages reported on node 1 00:04:42.874 [2024-07-25 11:50:29.980580] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:42.874 [2024-07-25 11:50:30.075050] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@648 -- # local es=0 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # rpc_cmd spdk_get_version 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@651 -- # es=1 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- 
rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 143311 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@948 -- # '[' -z 143311 ']' 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@952 -- # kill -0 143311 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # uname 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 143311 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 143311' 00:04:48.157 killing process with pid 143311 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@967 -- # kill 143311 00:04:48.157 11:50:34 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # wait 143311 00:04:48.157 00:04:48.157 real 0m5.367s 00:04:48.157 user 0m5.134s 00:04:48.157 sys 0m0.263s 00:04:48.157 11:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:48.157 11:50:35 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.157 ************************************ 00:04:48.157 END TEST skip_rpc 00:04:48.157 ************************************ 00:04:48.157 11:50:35 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:48.157 11:50:35 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:48.157 11:50:35 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:48.157 11:50:35 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:48.157 11:50:35 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:48.157 ************************************ 00:04:48.157 START TEST skip_rpc_with_json 00:04:48.157 ************************************ 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_json 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=144263 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 144263 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@829 -- # '[' -z 144263 ']' 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:48.157 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:48.158 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:48.158 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
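The duplicated "Waiting for process to start up..." line above is waitforlisten echoing its own prompt; its readiness check can be approximated by hand. A small sketch, assuming scripts/rpc.py from the same tree is used as the JSON-RPC client:

  # poll the default UNIX socket until the target's RPC server answers
  until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.2
  done
  # once this returns, RPCs such as nvmf_get_transports / nvmf_create_transport can be issued
  ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version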
00:04:48.158 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:48.158 11:50:35 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.158 [2024-07-25 11:50:35.359687] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:04:48.158 [2024-07-25 11:50:35.359731] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid144263 ] 00:04:48.158 EAL: No free 2048 kB hugepages reported on node 1 00:04:48.418 [2024-07-25 11:50:35.414024] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:48.418 [2024-07-25 11:50:35.482135] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@862 -- # return 0 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.989 [2024-07-25 11:50:36.165139] nvmf_rpc.c:2569:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:48.989 request: 00:04:48.989 { 00:04:48.989 "trtype": "tcp", 00:04:48.989 "method": "nvmf_get_transports", 00:04:48.989 "req_id": 1 00:04:48.989 } 00:04:48.989 Got JSON-RPC error response 00:04:48.989 response: 00:04:48.989 { 00:04:48.989 "code": -19, 00:04:48.989 "message": "No such device" 00:04:48.989 } 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:48.989 [2024-07-25 11:50:36.177253] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@559 -- # xtrace_disable 00:04:48.989 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:49.250 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:04:49.250 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.250 { 00:04:49.250 "subsystems": [ 00:04:49.250 { 00:04:49.250 "subsystem": "vfio_user_target", 00:04:49.250 "config": null 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "keyring", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "iobuf", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "iobuf_set_options", 00:04:49.250 "params": { 00:04:49.250 "small_pool_count": 8192, 00:04:49.250 "large_pool_count": 1024, 00:04:49.250 "small_bufsize": 8192, 00:04:49.250 "large_bufsize": 
135168 00:04:49.250 } 00:04:49.250 } 00:04:49.250 ] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "sock", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "sock_set_default_impl", 00:04:49.250 "params": { 00:04:49.250 "impl_name": "posix" 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "sock_impl_set_options", 00:04:49.250 "params": { 00:04:49.250 "impl_name": "ssl", 00:04:49.250 "recv_buf_size": 4096, 00:04:49.250 "send_buf_size": 4096, 00:04:49.250 "enable_recv_pipe": true, 00:04:49.250 "enable_quickack": false, 00:04:49.250 "enable_placement_id": 0, 00:04:49.250 "enable_zerocopy_send_server": true, 00:04:49.250 "enable_zerocopy_send_client": false, 00:04:49.250 "zerocopy_threshold": 0, 00:04:49.250 "tls_version": 0, 00:04:49.250 "enable_ktls": false 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "sock_impl_set_options", 00:04:49.250 "params": { 00:04:49.250 "impl_name": "posix", 00:04:49.250 "recv_buf_size": 2097152, 00:04:49.250 "send_buf_size": 2097152, 00:04:49.250 "enable_recv_pipe": true, 00:04:49.250 "enable_quickack": false, 00:04:49.250 "enable_placement_id": 0, 00:04:49.250 "enable_zerocopy_send_server": true, 00:04:49.250 "enable_zerocopy_send_client": false, 00:04:49.250 "zerocopy_threshold": 0, 00:04:49.250 "tls_version": 0, 00:04:49.250 "enable_ktls": false 00:04:49.250 } 00:04:49.250 } 00:04:49.250 ] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "vmd", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "accel", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "accel_set_options", 00:04:49.250 "params": { 00:04:49.250 "small_cache_size": 128, 00:04:49.250 "large_cache_size": 16, 00:04:49.250 "task_count": 2048, 00:04:49.250 "sequence_count": 2048, 00:04:49.250 "buf_count": 2048 00:04:49.250 } 00:04:49.250 } 00:04:49.250 ] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "bdev", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "bdev_set_options", 00:04:49.250 "params": { 00:04:49.250 "bdev_io_pool_size": 65535, 00:04:49.250 "bdev_io_cache_size": 256, 00:04:49.250 "bdev_auto_examine": true, 00:04:49.250 "iobuf_small_cache_size": 128, 00:04:49.250 "iobuf_large_cache_size": 16 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "bdev_raid_set_options", 00:04:49.250 "params": { 00:04:49.250 "process_window_size_kb": 1024, 00:04:49.250 "process_max_bandwidth_mb_sec": 0 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "bdev_iscsi_set_options", 00:04:49.250 "params": { 00:04:49.250 "timeout_sec": 30 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "bdev_nvme_set_options", 00:04:49.250 "params": { 00:04:49.250 "action_on_timeout": "none", 00:04:49.250 "timeout_us": 0, 00:04:49.250 "timeout_admin_us": 0, 00:04:49.250 "keep_alive_timeout_ms": 10000, 00:04:49.250 "arbitration_burst": 0, 00:04:49.250 "low_priority_weight": 0, 00:04:49.250 "medium_priority_weight": 0, 00:04:49.250 "high_priority_weight": 0, 00:04:49.250 "nvme_adminq_poll_period_us": 10000, 00:04:49.250 "nvme_ioq_poll_period_us": 0, 00:04:49.250 "io_queue_requests": 0, 00:04:49.250 "delay_cmd_submit": true, 00:04:49.250 "transport_retry_count": 4, 00:04:49.250 "bdev_retry_count": 3, 00:04:49.250 "transport_ack_timeout": 0, 00:04:49.250 "ctrlr_loss_timeout_sec": 0, 00:04:49.250 "reconnect_delay_sec": 0, 00:04:49.250 "fast_io_fail_timeout_sec": 0, 00:04:49.250 "disable_auto_failback": false, 00:04:49.250 "generate_uuids": 
false, 00:04:49.250 "transport_tos": 0, 00:04:49.250 "nvme_error_stat": false, 00:04:49.250 "rdma_srq_size": 0, 00:04:49.250 "io_path_stat": false, 00:04:49.250 "allow_accel_sequence": false, 00:04:49.250 "rdma_max_cq_size": 0, 00:04:49.250 "rdma_cm_event_timeout_ms": 0, 00:04:49.250 "dhchap_digests": [ 00:04:49.250 "sha256", 00:04:49.250 "sha384", 00:04:49.250 "sha512" 00:04:49.250 ], 00:04:49.250 "dhchap_dhgroups": [ 00:04:49.250 "null", 00:04:49.250 "ffdhe2048", 00:04:49.250 "ffdhe3072", 00:04:49.250 "ffdhe4096", 00:04:49.250 "ffdhe6144", 00:04:49.250 "ffdhe8192" 00:04:49.250 ] 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "bdev_nvme_set_hotplug", 00:04:49.250 "params": { 00:04:49.250 "period_us": 100000, 00:04:49.250 "enable": false 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "bdev_wait_for_examine" 00:04:49.250 } 00:04:49.250 ] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "scsi", 00:04:49.250 "config": null 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "scheduler", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "framework_set_scheduler", 00:04:49.250 "params": { 00:04:49.250 "name": "static" 00:04:49.250 } 00:04:49.250 } 00:04:49.250 ] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "vhost_scsi", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "vhost_blk", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "ublk", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "nbd", 00:04:49.250 "config": [] 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "subsystem": "nvmf", 00:04:49.250 "config": [ 00:04:49.250 { 00:04:49.250 "method": "nvmf_set_config", 00:04:49.250 "params": { 00:04:49.250 "discovery_filter": "match_any", 00:04:49.250 "admin_cmd_passthru": { 00:04:49.250 "identify_ctrlr": false 00:04:49.250 } 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "nvmf_set_max_subsystems", 00:04:49.250 "params": { 00:04:49.250 "max_subsystems": 1024 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "nvmf_set_crdt", 00:04:49.250 "params": { 00:04:49.250 "crdt1": 0, 00:04:49.250 "crdt2": 0, 00:04:49.250 "crdt3": 0 00:04:49.250 } 00:04:49.250 }, 00:04:49.250 { 00:04:49.250 "method": "nvmf_create_transport", 00:04:49.250 "params": { 00:04:49.250 "trtype": "TCP", 00:04:49.250 "max_queue_depth": 128, 00:04:49.250 "max_io_qpairs_per_ctrlr": 127, 00:04:49.250 "in_capsule_data_size": 4096, 00:04:49.250 "max_io_size": 131072, 00:04:49.251 "io_unit_size": 131072, 00:04:49.251 "max_aq_depth": 128, 00:04:49.251 "num_shared_buffers": 511, 00:04:49.251 "buf_cache_size": 4294967295, 00:04:49.251 "dif_insert_or_strip": false, 00:04:49.251 "zcopy": false, 00:04:49.251 "c2h_success": true, 00:04:49.251 "sock_priority": 0, 00:04:49.251 "abort_timeout_sec": 1, 00:04:49.251 "ack_timeout": 0, 00:04:49.251 "data_wr_pool_size": 0 00:04:49.251 } 00:04:49.251 } 00:04:49.251 ] 00:04:49.251 }, 00:04:49.251 { 00:04:49.251 "subsystem": "iscsi", 00:04:49.251 "config": [ 00:04:49.251 { 00:04:49.251 "method": "iscsi_set_options", 00:04:49.251 "params": { 00:04:49.251 "node_base": "iqn.2016-06.io.spdk", 00:04:49.251 "max_sessions": 128, 00:04:49.251 "max_connections_per_session": 2, 00:04:49.251 "max_queue_depth": 64, 00:04:49.251 "default_time2wait": 2, 00:04:49.251 "default_time2retain": 20, 00:04:49.251 "first_burst_length": 8192, 00:04:49.251 "immediate_data": true, 00:04:49.251 "allow_duplicated_isid": 
false, 00:04:49.251 "error_recovery_level": 0, 00:04:49.251 "nop_timeout": 60, 00:04:49.251 "nop_in_interval": 30, 00:04:49.251 "disable_chap": false, 00:04:49.251 "require_chap": false, 00:04:49.251 "mutual_chap": false, 00:04:49.251 "chap_group": 0, 00:04:49.251 "max_large_datain_per_connection": 64, 00:04:49.251 "max_r2t_per_connection": 4, 00:04:49.251 "pdu_pool_size": 36864, 00:04:49.251 "immediate_data_pool_size": 16384, 00:04:49.251 "data_out_pool_size": 2048 00:04:49.251 } 00:04:49.251 } 00:04:49.251 ] 00:04:49.251 } 00:04:49.251 ] 00:04:49.251 } 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 144263 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 144263 ']' 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 144263 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144263 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144263' 00:04:49.251 killing process with pid 144263 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 144263 00:04:49.251 11:50:36 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 144263 00:04:49.511 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=144499 00:04:49.511 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:49.511 11:50:36 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 144499 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@948 -- # '[' -z 144499 ']' 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@952 -- # kill -0 144499 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # uname 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 144499 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@966 -- # echo 'killing process with pid 144499' 00:04:54.805 killing process with pid 144499 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@967 -- # kill 144499 00:04:54.805 11:50:41 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # wait 144499 
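The config.json dumped above is the output of save_config, and the follow-up run replays it through --json; the same round trip can be done by hand. A sketch, assuming scripts/rpc.py and a writable path such as /tmp/spdk_config.json (the path is illustrative only):

  # capture the live configuration of the running target
  ./scripts/rpc.py -s /var/tmp/spdk.sock save_config > /tmp/spdk_config.json
  # stop the target, then restart it non-interactively from that file,
  # mirroring the flags used by the second skip_rpc_with_json run above
  ./build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /tmp/spdk_config.json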
00:04:54.805 11:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:54.805 11:50:42 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/log.txt 00:04:55.065 00:04:55.065 real 0m6.747s 00:04:55.065 user 0m6.584s 00:04:55.065 sys 0m0.579s 00:04:55.065 11:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.065 11:50:42 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:55.065 ************************************ 00:04:55.065 END TEST skip_rpc_with_json 00:04:55.065 ************************************ 00:04:55.065 11:50:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.066 11:50:42 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 ************************************ 00:04:55.066 START TEST skip_rpc_with_delay 00:04:55.066 ************************************ 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1123 -- # test_skip_rpc_with_delay 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@648 -- # local es=0 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:55.066 [2024-07-25 11:50:42.172211] app.c: 832:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
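The "Cannot use '--wait-for-rpc' if no RPC server is going to be started" error above is the negative case this test provokes by combining --wait-for-rpc with --no-rpc-server. For contrast, a sketch of the normal --wait-for-rpc flow, assuming the framework_start_init RPC (not exercised in this log) is what resumes startup:

  # start the target but hold subsystem initialization until an RPC says otherwise
  ./build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  # ...issue any pre-init RPCs here (e.g. sock_impl_set_options, as seen in the saved config)...
  # then let initialization continue
  ./scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init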
00:04:55.066 [2024-07-25 11:50:42.172274] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 0, errno: 2 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@651 -- # es=1 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:55.066 00:04:55.066 real 0m0.062s 00:04:55.066 user 0m0.043s 00:04:55.066 sys 0m0.019s 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:55.066 11:50:42 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 ************************************ 00:04:55.066 END TEST skip_rpc_with_delay 00:04:55.066 ************************************ 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:55.066 11:50:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:55.066 11:50:42 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:55.066 11:50:42 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:55.066 11:50:42 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 ************************************ 00:04:55.066 START TEST exit_on_failed_rpc_init 00:04:55.066 ************************************ 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1123 -- # test_exit_on_failed_rpc_init 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=145476 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 145476 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@829 -- # '[' -z 145476 ']' 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:55.066 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:55.066 11:50:42 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:55.066 [2024-07-25 11:50:42.299169] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:04:55.066 [2024-07-25 11:50:42.299211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145476 ] 00:04:55.326 EAL: No free 2048 kB hugepages reported on node 1 00:04:55.326 [2024-07-25 11:50:42.352095] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:55.326 [2024-07-25 11:50:42.431970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@862 -- # return 0 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@648 -- # local es=0 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt ]] 00:04:55.895 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x2 00:04:56.154 [2024-07-25 11:50:43.166654] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
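At this point exit_on_failed_rpc_init has the first target (pid 145476) holding the default /var/tmp/spdk.sock and is starting a second instance on core mask 0x2; the lines that follow show the second init failing because that socket is already taken. A sketch of the conflict, and of giving a second instance its own socket with -r as the json_config tests further down do; $SPDK_DIR and /var/tmp/spdk2.sock are placeholders:

    # sketch: two targets cannot share the default RPC socket
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x1 &          # owns /var/tmp/spdk.sock
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x2            # fails: "RPC Unix domain socket path /var/tmp/spdk.sock in use"
    # a second instance needs its own socket via -r (hypothetical path):
    "$SPDK_DIR"/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk2.sock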
00:04:56.154 [2024-07-25 11:50:43.166701] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145703 ] 00:04:56.154 EAL: No free 2048 kB hugepages reported on node 1 00:04:56.154 [2024-07-25 11:50:43.218627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:56.154 [2024-07-25 11:50:43.291712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.154 [2024-07-25 11:50:43.291776] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 00:04:56.154 [2024-07-25 11:50:43.291785] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:56.154 [2024-07-25 11:50:43.291791] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@651 -- # es=234 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@660 -- # es=106 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@661 -- # case "$es" in 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@668 -- # es=1 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 145476 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@948 -- # '[' -z 145476 ']' 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@952 -- # kill -0 145476 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # uname 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:04:56.154 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 145476 00:04:56.413 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:04:56.413 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:04:56.413 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 145476' 00:04:56.413 killing process with pid 145476 00:04:56.413 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@967 -- # kill 145476 00:04:56.413 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # wait 145476 00:04:56.673 00:04:56.673 real 0m1.468s 00:04:56.673 user 0m1.700s 00:04:56.673 sys 0m0.404s 00:04:56.673 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.673 11:50:43 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:56.673 ************************************ 00:04:56.673 END TEST exit_on_failed_rpc_init 00:04:56.673 ************************************ 00:04:56.673 11:50:43 skip_rpc -- common/autotest_common.sh@1142 -- # return 0 00:04:56.673 11:50:43 skip_rpc -- rpc/skip_rpc.sh@81 -- 
# rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc/config.json 00:04:56.673 00:04:56.673 real 0m13.995s 00:04:56.673 user 0m13.593s 00:04:56.673 sys 0m1.508s 00:04:56.673 11:50:43 skip_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.673 11:50:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:56.673 ************************************ 00:04:56.673 END TEST skip_rpc 00:04:56.673 ************************************ 00:04:56.673 11:50:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.673 11:50:43 -- spdk/autotest.sh@171 -- # run_test rpc_client /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.673 11:50:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.673 11:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.673 11:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:56.673 ************************************ 00:04:56.673 START TEST rpc_client 00:04:56.673 ************************************ 00:04:56.673 11:50:43 rpc_client -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client.sh 00:04:56.673 * Looking for test storage... 00:04:56.673 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client 00:04:56.673 11:50:43 rpc_client -- rpc_client/rpc_client.sh@10 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_client/rpc_client_test 00:04:56.673 OK 00:04:56.673 11:50:43 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:56.673 00:04:56.673 real 0m0.104s 00:04:56.673 user 0m0.052s 00:04:56.673 sys 0m0.060s 00:04:56.673 11:50:43 rpc_client -- common/autotest_common.sh@1124 -- # xtrace_disable 00:04:56.673 11:50:43 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:56.673 ************************************ 00:04:56.673 END TEST rpc_client 00:04:56.673 ************************************ 00:04:56.933 11:50:43 -- common/autotest_common.sh@1142 -- # return 0 00:04:56.933 11:50:43 -- spdk/autotest.sh@172 -- # run_test json_config /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.933 11:50:43 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:04:56.933 11:50:43 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:04:56.933 11:50:43 -- common/autotest_common.sh@10 -- # set +x 00:04:56.933 ************************************ 00:04:56.933 START TEST json_config 00:04:56.933 ************************************ 00:04:56.933 11:50:43 json_config -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config.sh 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:56.933 11:50:44 json_config -- 
nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:04:56.933 11:50:44 json_config -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:56.933 11:50:44 json_config -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:56.933 11:50:44 json_config -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:56.933 11:50:44 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.933 11:50:44 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.933 11:50:44 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.933 11:50:44 json_config -- paths/export.sh@5 -- # export PATH 00:04:56.933 11:50:44 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@47 -- # : 0 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:56.933 11:50:44 json_config -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:04:56.933 11:50:44 json_config -- nvmf/common.sh@51 -- # have_pci_nics=0 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@31 -- # app_pid=(['target']='' ['initiator']='') 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@31 -- # declare -A app_pid 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@32 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock' ['initiator']='/var/tmp/spdk_initiator.sock') 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@32 -- # declare -A app_socket 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@33 -- # app_params=(['target']='-m 0x1 -s 1024' ['initiator']='-m 0x2 -g -u -s 1024') 00:04:56.933 11:50:44 json_config -- json_config/json_config.sh@33 -- # declare -A app_params 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@34 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json' ['initiator']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json') 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@34 -- # declare -A configs_path 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@40 -- # last_event_id=0 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@359 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@360 -- # echo 'INFO: JSON configuration test init' 00:04:56.934 INFO: JSON configuration test init 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@361 -- # json_config_test_init 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@266 -- # timing_enter json_config_test_init 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@267 -- # timing_enter json_config_setup_target 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.934 11:50:44 json_config -- json_config/json_config.sh@269 -- # json_config_test_start_app target --wait-for-rpc 00:04:56.934 11:50:44 json_config -- json_config/common.sh@9 -- # local app=target 00:04:56.934 11:50:44 json_config -- json_config/common.sh@10 -- # shift 00:04:56.934 11:50:44 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:56.934 11:50:44 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:56.934 11:50:44 json_config -- 
json_config/common.sh@15 -- # local app_extra_params= 00:04:56.934 11:50:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.934 11:50:44 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:56.934 11:50:44 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=145829 00:04:56.934 11:50:44 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:56.934 Waiting for target to run... 00:04:56.934 11:50:44 json_config -- json_config/common.sh@25 -- # waitforlisten 145829 /var/tmp/spdk_tgt.sock 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@829 -- # '[' -z 145829 ']' 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:04:56.934 11:50:44 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --wait-for-rpc 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:56.934 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:04:56.934 11:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:56.934 [2024-07-25 11:50:44.127258] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:04:56.934 [2024-07-25 11:50:44.127304] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid145829 ] 00:04:56.934 EAL: No free 2048 kB hugepages reported on node 1 00:04:57.193 [2024-07-25 11:50:44.393638] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:57.452 [2024-07-25 11:50:44.464857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@862 -- # return 0 00:04:57.711 11:50:44 json_config -- json_config/common.sh@26 -- # echo '' 00:04:57.711 00:04:57.711 11:50:44 json_config -- json_config/json_config.sh@273 -- # create_accel_config 00:04:57.711 11:50:44 json_config -- json_config/json_config.sh@97 -- # timing_enter create_accel_config 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.711 11:50:44 json_config -- json_config/json_config.sh@99 -- # [[ 0 -eq 1 ]] 00:04:57.711 11:50:44 json_config -- json_config/json_config.sh@105 -- # timing_exit create_accel_config 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:04:57.711 11:50:44 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:57.970 11:50:44 json_config -- json_config/json_config.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh --json-with-subsystems 00:04:57.970 11:50:44 json_config -- json_config/json_config.sh@278 -- # tgt_rpc load_config 00:04:57.970 11:50:44 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock load_config 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@280 -- # tgt_check_notification_types 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@43 -- # timing_enter tgt_check_notification_types 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@45 -- # local ret=0 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@46 -- # enabled_types=('bdev_register' 'bdev_unregister') 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@46 -- # local enabled_types 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@48 -- # tgt_rpc notify_get_types 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@48 -- # jq -r '.[]' 00:05:01.259 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock notify_get_types 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@48 -- # get_types=('bdev_register' 'bdev_unregister') 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@48 -- # local get_types 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@50 -- # local type_diff 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@51 -- # echo bdev_register bdev_unregister bdev_register bdev_unregister 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@51 -- # tr ' ' '\n' 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@51 -- # sort 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@51 -- # uniq -u 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@51 -- # type_diff= 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@53 -- # [[ -n '' ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@58 -- # timing_exit tgt_check_notification_types 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@59 -- # return 0 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@282 -- # [[ 0 -eq 1 ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@286 -- # [[ 0 -eq 1 ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@290 -- # [[ 0 -eq 1 ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@294 -- # [[ 1 -eq 1 ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@295 -- # create_nvmf_subsystem_config 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@234 -- # timing_enter create_nvmf_subsystem_config 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:01.259 11:50:48 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@236 -- # NVMF_FIRST_TARGET_IP=127.0.0.1 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@237 -- # [[ tcp == \r\d\m\a ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@241 -- # [[ -z 127.0.0.1 ]] 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@246 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.259 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name MallocForNvmf0 00:05:01.259 MallocForNvmf0 00:05:01.259 11:50:48 json_config -- json_config/json_config.sh@247 -- # tgt_rpc bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.259 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 4 1024 --name MallocForNvmf1 00:05:01.518 MallocForNvmf1 00:05:01.518 11:50:48 json_config -- json_config/json_config.sh@249 -- # tgt_rpc nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.518 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_transport -t tcp -u 8192 -c 0 00:05:01.776 [2024-07-25 11:50:48.776689] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:01.776 11:50:48 json_config -- json_config/json_config.sh@250 -- # tgt_rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.776 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:05:01.776 11:50:48 json_config -- json_config/json_config.sh@251 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:01.776 11:50:48 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0 00:05:02.035 11:50:49 json_config -- json_config/json_config.sh@252 -- # tgt_rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.035 11:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1 00:05:02.294 11:50:49 json_config -- json_config/json_config.sh@253 -- # tgt_rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.294 11:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420 00:05:02.294 [2024-07-25 11:50:49.450824] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:02.294 11:50:49 json_config -- json_config/json_config.sh@255 -- # timing_exit create_nvmf_subsystem_config 00:05:02.294 11:50:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.294 11:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.294 11:50:49 json_config -- json_config/json_config.sh@297 -- # timing_exit json_config_setup_target 00:05:02.294 11:50:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.294 11:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.294 11:50:49 json_config -- json_config/json_config.sh@299 -- # [[ 0 -eq 1 ]] 00:05:02.294 11:50:49 json_config -- json_config/json_config.sh@304 -- # tgt_rpc bdev_malloc_create 8 512 --name MallocBdevForConfigChangeCheck 00:05:02.294 11:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_create 8 512 --name 
MallocBdevForConfigChangeCheck 00:05:02.552 MallocBdevForConfigChangeCheck 00:05:02.552 11:50:49 json_config -- json_config/json_config.sh@306 -- # timing_exit json_config_test_init 00:05:02.552 11:50:49 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:02.552 11:50:49 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:02.552 11:50:49 json_config -- json_config/json_config.sh@363 -- # tgt_rpc save_config 00:05:02.552 11:50:49 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:02.811 11:50:50 json_config -- json_config/json_config.sh@365 -- # echo 'INFO: shutting down applications...' 00:05:02.811 INFO: shutting down applications... 00:05:02.811 11:50:50 json_config -- json_config/json_config.sh@366 -- # [[ 0 -eq 1 ]] 00:05:02.811 11:50:50 json_config -- json_config/json_config.sh@372 -- # json_config_clear target 00:05:02.811 11:50:50 json_config -- json_config/json_config.sh@336 -- # [[ -n 22 ]] 00:05:02.811 11:50:50 json_config -- json_config/json_config.sh@337 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py -s /var/tmp/spdk_tgt.sock clear_config 00:05:04.718 Calling clear_iscsi_subsystem 00:05:04.718 Calling clear_nvmf_subsystem 00:05:04.718 Calling clear_nbd_subsystem 00:05:04.718 Calling clear_ublk_subsystem 00:05:04.718 Calling clear_vhost_blk_subsystem 00:05:04.718 Calling clear_vhost_scsi_subsystem 00:05:04.718 Calling clear_bdev_subsystem 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@341 -- # local config_filter=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@347 -- # count=100 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@348 -- # '[' 100 -gt 0 ']' 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method delete_global_parameters 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@349 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method check_empty 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@349 -- # break 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@354 -- # '[' 100 -eq 0 ']' 00:05:04.718 11:50:51 json_config -- json_config/json_config.sh@373 -- # json_config_test_shutdown_app target 00:05:04.718 11:50:51 json_config -- json_config/common.sh@31 -- # local app=target 00:05:04.718 11:50:51 json_config -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:04.718 11:50:51 json_config -- json_config/common.sh@35 -- # [[ -n 145829 ]] 00:05:04.718 11:50:51 json_config -- json_config/common.sh@38 -- # kill -SIGINT 145829 00:05:04.718 11:50:51 json_config -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:04.718 11:50:51 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:04.718 11:50:51 json_config -- json_config/common.sh@41 -- # kill -0 145829 00:05:04.718 11:50:51 json_config -- json_config/common.sh@45 -- # sleep 0.5 00:05:05.287 11:50:52 json_config -- json_config/common.sh@40 -- # (( i++ )) 00:05:05.287 11:50:52 json_config -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:05.287 11:50:52 
json_config -- json_config/common.sh@41 -- # kill -0 145829 00:05:05.287 11:50:52 json_config -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:05.287 11:50:52 json_config -- json_config/common.sh@43 -- # break 00:05:05.287 11:50:52 json_config -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:05.287 11:50:52 json_config -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:05.287 SPDK target shutdown done 00:05:05.287 11:50:52 json_config -- json_config/json_config.sh@375 -- # echo 'INFO: relaunching applications...' 00:05:05.287 INFO: relaunching applications... 00:05:05.287 11:50:52 json_config -- json_config/json_config.sh@376 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.287 11:50:52 json_config -- json_config/common.sh@9 -- # local app=target 00:05:05.287 11:50:52 json_config -- json_config/common.sh@10 -- # shift 00:05:05.287 11:50:52 json_config -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:05.287 11:50:52 json_config -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:05.287 11:50:52 json_config -- json_config/common.sh@15 -- # local app_extra_params= 00:05:05.287 11:50:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.287 11:50:52 json_config -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:05.287 11:50:52 json_config -- json_config/common.sh@22 -- # app_pid["$app"]=147361 00:05:05.287 11:50:52 json_config -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:05.287 Waiting for target to run... 00:05:05.287 11:50:52 json_config -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:05.287 11:50:52 json_config -- json_config/common.sh@25 -- # waitforlisten 147361 /var/tmp/spdk_tgt.sock 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@829 -- # '[' -z 147361 ']' 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:05.287 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:05.287 11:50:52 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:05.287 [2024-07-25 11:50:52.455280] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
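The relaunch above restarts spdk_tgt from the spdk_tgt_config.json saved earlier; the configuration it replays is the nvmf setup built over /var/tmp/spdk_tgt.sock a few screens up (two malloc bdevs, a TCP transport, subsystem nqn.2016-06.io.spdk:cnode1 with two namespaces and a 127.0.0.1:4420 listener). A condensed sketch of that RPC sequence, using only calls that appear in this log; $SPDK_DIR is a placeholder for the job's spdk checkout:

    # sketch: the nvmf configuration that spdk_tgt_config.json replays on relaunch
    RPC="$SPDK_DIR/scripts/rpc.py -s /var/tmp/spdk_tgt.sock"
    $RPC bdev_malloc_create 8 512 --name MallocForNvmf0
    $RPC bdev_malloc_create 4 1024 --name MallocForNvmf1
    $RPC nvmf_create_transport -t tcp -u 8192 -c 0
    $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf0
    $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 MallocForNvmf1
    $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 127.0.0.1 -s 4420
    $RPC save_config > "$SPDK_DIR"/spdk_tgt_config.json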
00:05:05.288 [2024-07-25 11:50:52.455342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid147361 ] 00:05:05.288 EAL: No free 2048 kB hugepages reported on node 1 00:05:05.857 [2024-07-25 11:50:52.896082] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:05.857 [2024-07-25 11:50:52.988263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:09.146 [2024-07-25 11:50:55.998913] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:05:09.146 [2024-07-25 11:50:56.031233] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:05:09.406 11:50:56 json_config -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:09.406 11:50:56 json_config -- common/autotest_common.sh@862 -- # return 0 00:05:09.406 11:50:56 json_config -- json_config/common.sh@26 -- # echo '' 00:05:09.406 00:05:09.406 11:50:56 json_config -- json_config/json_config.sh@377 -- # [[ 0 -eq 1 ]] 00:05:09.406 11:50:56 json_config -- json_config/json_config.sh@381 -- # echo 'INFO: Checking if target configuration is the same...' 00:05:09.406 INFO: Checking if target configuration is the same... 00:05:09.406 11:50:56 json_config -- json_config/json_config.sh@382 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.406 11:50:56 json_config -- json_config/json_config.sh@382 -- # tgt_rpc save_config 00:05:09.406 11:50:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.406 + '[' 2 -ne 2 ']' 00:05:09.406 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.406 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.406 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.406 +++ basename /dev/fd/62 00:05:09.406 ++ mktemp /tmp/62.XXX 00:05:09.406 + tmp_file_1=/tmp/62.tKn 00:05:09.406 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.406 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.406 + tmp_file_2=/tmp/spdk_tgt_config.json.JmY 00:05:09.406 + ret=0 00:05:09.406 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:09.974 + diff -u /tmp/62.tKn /tmp/spdk_tgt_config.json.JmY 00:05:09.974 + echo 'INFO: JSON config files are the same' 00:05:09.974 INFO: JSON config files are the same 00:05:09.974 + rm /tmp/62.tKn /tmp/spdk_tgt_config.json.JmY 00:05:09.974 + exit 0 00:05:09.974 11:50:56 json_config -- json_config/json_config.sh@383 -- # [[ 0 -eq 1 ]] 00:05:09.974 11:50:56 json_config -- json_config/json_config.sh@388 -- # echo 'INFO: changing configuration and checking if this can be detected...' 00:05:09.974 INFO: changing configuration and checking if this can be detected... 
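The '+'-prefixed trace above is json_diff.sh confirming the relaunched target still matches spdk_tgt_config.json: both sides are normalized with config_filter.py -method sort and compared with diff -u, and exit 0 yields 'INFO: JSON config files are the same'. The step announced next deletes a marker bdev so the same check must report a difference. A hedged sketch of the comparison, assuming config_filter.py filters stdin to stdout as the pipelines earlier in this log suggest; the /tmp names below are placeholders for the mktemp files /tmp/62.tKn and /tmp/spdk_tgt_config.json.JmY:

    # sketch of the normalize-and-diff check performed by json_diff.sh above
    "$SPDK_DIR"/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config \
        | "$SPDK_DIR"/test/json_config/config_filter.py -method sort > /tmp/live.sorted.json
    "$SPDK_DIR"/test/json_config/config_filter.py -method sort \
        < "$SPDK_DIR"/spdk_tgt_config.json > /tmp/saved.sorted.json
    diff -u /tmp/live.sorted.json /tmp/saved.sorted.json \
        && echo 'INFO: JSON config files are the same'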
00:05:09.974 11:50:56 json_config -- json_config/json_config.sh@390 -- # tgt_rpc bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.974 11:50:56 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock bdev_malloc_delete MallocBdevForConfigChangeCheck 00:05:09.974 11:50:57 json_config -- json_config/json_config.sh@391 -- # tgt_rpc save_config 00:05:09.974 11:50:57 json_config -- json_config/common.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk_tgt.sock save_config 00:05:09.974 11:50:57 json_config -- json_config/json_config.sh@391 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh /dev/fd/62 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.974 + '[' 2 -ne 2 ']' 00:05:09.974 +++ dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_diff.sh 00:05:09.974 ++ readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/../.. 00:05:09.974 + rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:05:09.974 +++ basename /dev/fd/62 00:05:09.974 ++ mktemp /tmp/62.XXX 00:05:09.974 + tmp_file_1=/tmp/62.V0c 00:05:09.974 +++ basename /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:09.974 ++ mktemp /tmp/spdk_tgt_config.json.XXX 00:05:09.974 + tmp_file_2=/tmp/spdk_tgt_config.json.gYq 00:05:09.974 + ret=0 00:05:09.974 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.233 + /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/config_filter.py -method sort 00:05:10.493 + diff -u /tmp/62.V0c /tmp/spdk_tgt_config.json.gYq 00:05:10.493 + ret=1 00:05:10.493 + echo '=== Start of file: /tmp/62.V0c ===' 00:05:10.493 + cat /tmp/62.V0c 00:05:10.493 + echo '=== End of file: /tmp/62.V0c ===' 00:05:10.493 + echo '' 00:05:10.493 + echo '=== Start of file: /tmp/spdk_tgt_config.json.gYq ===' 00:05:10.493 + cat /tmp/spdk_tgt_config.json.gYq 00:05:10.493 + echo '=== End of file: /tmp/spdk_tgt_config.json.gYq ===' 00:05:10.493 + echo '' 00:05:10.493 + rm /tmp/62.V0c /tmp/spdk_tgt_config.json.gYq 00:05:10.493 + exit 1 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@395 -- # echo 'INFO: configuration change detected.' 00:05:10.493 INFO: configuration change detected. 
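With the change detected, the test moves on to teardown. The shutdown helper in json_config/common.sh, used above when pid 145829 was stopped and again below for the json_config_extra_key target, sends SIGINT and then polls the pid for up to 30 half-second intervals before announcing 'SPDK target shutdown done'. A compact sketch of that wait loop reconstructed from the trace; $pid stands in for the app_pid the harness tracks:

    # sketch of the graceful-shutdown wait used by json_config/common.sh above
    kill -SIGINT "$pid"
    for (( i = 0; i < 30; i++ )); do
        kill -0 "$pid" 2>/dev/null || break    # process gone: stop polling
        sleep 0.5
    done
    echo 'SPDK target shutdown done'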
00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@398 -- # json_config_test_fini 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@310 -- # timing_enter json_config_test_fini 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@311 -- # local ret=0 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@313 -- # [[ -n '' ]] 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@321 -- # [[ -n 147361 ]] 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@324 -- # cleanup_bdev_subsystem_config 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@188 -- # timing_enter cleanup_bdev_subsystem_config 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@190 -- # [[ 0 -eq 1 ]] 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@197 -- # uname -s 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@197 -- # [[ Linux = Linux ]] 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@198 -- # rm -f /sample_aio 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@201 -- # [[ 0 -eq 1 ]] 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@205 -- # timing_exit cleanup_bdev_subsystem_config 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:10.493 11:50:57 json_config -- json_config/json_config.sh@327 -- # killprocess 147361 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@948 -- # '[' -z 147361 ']' 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@952 -- # kill -0 147361 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@953 -- # uname 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 147361 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@966 -- # echo 'killing process with pid 147361' 00:05:10.493 killing process with pid 147361 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@967 -- # kill 147361 00:05:10.493 11:50:57 json_config -- common/autotest_common.sh@972 -- # wait 147361 00:05:11.872 11:50:59 json_config -- json_config/json_config.sh@330 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_initiator_config.json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/spdk_tgt_config.json 00:05:11.872 11:50:59 json_config -- json_config/json_config.sh@331 -- # timing_exit json_config_test_fini 00:05:11.872 11:50:59 json_config -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:11.872 11:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.132 11:50:59 json_config -- json_config/json_config.sh@332 -- # return 0 00:05:12.132 11:50:59 json_config -- json_config/json_config.sh@400 -- # echo 'INFO: Success' 00:05:12.132 INFO: Success 00:05:12.132 00:05:12.132 real 0m15.169s 00:05:12.132 user 
0m15.944s 00:05:12.132 sys 0m1.836s 00:05:12.132 11:50:59 json_config -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:12.132 11:50:59 json_config -- common/autotest_common.sh@10 -- # set +x 00:05:12.132 ************************************ 00:05:12.132 END TEST json_config 00:05:12.132 ************************************ 00:05:12.132 11:50:59 -- common/autotest_common.sh@1142 -- # return 0 00:05:12.132 11:50:59 -- spdk/autotest.sh@173 -- # run_test json_config_extra_key /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:12.132 11:50:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:12.132 11:50:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:12.132 11:50:59 -- common/autotest_common.sh@10 -- # set +x 00:05:12.132 ************************************ 00:05:12.132 START TEST json_config_extra_key 00:05:12.133 ************************************ 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/json_config_extra_key.sh 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:05:12.133 11:50:59 json_config_extra_key -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:05:12.133 11:50:59 json_config_extra_key -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:05:12.133 11:50:59 json_config_extra_key -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:05:12.133 11:50:59 json_config_extra_key -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.133 11:50:59 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.133 11:50:59 json_config_extra_key -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.133 11:50:59 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:05:12.133 11:50:59 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@47 -- # : 0 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:05:12.133 11:50:59 json_config_extra_key -- nvmf/common.sh@51 -- # have_pci_nics=0 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/common.sh 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:05:12.133 11:50:59 json_config_extra_key -- 
json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json') 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 00:05:12.133 INFO: launching applications... 00:05:12.133 11:50:59 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=148694 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:05:12.133 Waiting for target to run... 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 148694 /var/tmp/spdk_tgt.sock 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@829 -- # '[' -z 148694 ']' 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:05:12.133 11:50:59 json_config_extra_key -- json_config/common.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/extra_key.json 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:05:12.133 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:12.133 11:50:59 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:12.133 [2024-07-25 11:50:59.366041] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:12.133 [2024-07-25 11:50:59.366100] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid148694 ] 00:05:12.393 EAL: No free 2048 kB hugepages reported on node 1 00:05:12.393 [2024-07-25 11:50:59.639743] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:12.652 [2024-07-25 11:50:59.709134] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:13.221 11:51:00 json_config_extra_key -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:13.221 11:51:00 json_config_extra_key -- common/autotest_common.sh@862 -- # return 0 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:05:13.221 00:05:13.221 11:51:00 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 00:05:13.221 INFO: shutting down applications... 00:05:13.221 11:51:00 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 148694 ]] 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 148694 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 148694 00:05:13.221 11:51:00 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 148694 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@43 -- # break 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:05:13.480 11:51:00 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:05:13.480 SPDK target shutdown done 00:05:13.480 11:51:00 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:05:13.480 Success 00:05:13.480 00:05:13.480 real 0m1.462s 00:05:13.480 user 0m1.266s 00:05:13.480 sys 0m0.358s 00:05:13.480 11:51:00 json_config_extra_key -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:13.480 11:51:00 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:05:13.480 ************************************ 00:05:13.480 END TEST json_config_extra_key 00:05:13.480 ************************************ 00:05:13.480 11:51:00 -- common/autotest_common.sh@1142 -- # return 0 00:05:13.480 11:51:00 -- spdk/autotest.sh@174 -- # run_test alias_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.480 11:51:00 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:13.480 11:51:00 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:13.480 11:51:00 -- 
common/autotest_common.sh@10 -- # set +x 00:05:13.739 ************************************ 00:05:13.739 START TEST alias_rpc 00:05:13.739 ************************************ 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:05:13.739 * Looking for test storage... 00:05:13.739 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/alias_rpc 00:05:13.739 11:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:05:13.739 11:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:13.739 11:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=149111 00:05:13.739 11:51:00 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 149111 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@829 -- # '[' -z 149111 ']' 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:13.739 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:13.739 11:51:00 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:13.739 [2024-07-25 11:51:00.868253] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:13.739 [2024-07-25 11:51:00.868299] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149111 ] 00:05:13.739 EAL: No free 2048 kB hugepages reported on node 1 00:05:13.739 [2024-07-25 11:51:00.922421] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:13.998 [2024-07-25 11:51:01.004427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:14.566 11:51:01 alias_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:14.566 11:51:01 alias_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:14.566 11:51:01 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_config -i 00:05:14.824 11:51:01 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 149111 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@948 -- # '[' -z 149111 ']' 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@952 -- # kill -0 149111 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@953 -- # uname 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149111 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149111' 00:05:14.824 killing process with pid 149111 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@967 
-- # kill 149111 00:05:14.824 11:51:01 alias_rpc -- common/autotest_common.sh@972 -- # wait 149111 00:05:15.082 00:05:15.082 real 0m1.474s 00:05:15.082 user 0m1.622s 00:05:15.082 sys 0m0.385s 00:05:15.082 11:51:02 alias_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:15.082 11:51:02 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:15.082 ************************************ 00:05:15.082 END TEST alias_rpc 00:05:15.082 ************************************ 00:05:15.082 11:51:02 -- common/autotest_common.sh@1142 -- # return 0 00:05:15.082 11:51:02 -- spdk/autotest.sh@176 -- # [[ 0 -eq 0 ]] 00:05:15.082 11:51:02 -- spdk/autotest.sh@177 -- # run_test spdkcli_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:15.082 11:51:02 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:15.082 11:51:02 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:15.082 11:51:02 -- common/autotest_common.sh@10 -- # set +x 00:05:15.082 ************************************ 00:05:15.082 START TEST spdkcli_tcp 00:05:15.082 ************************************ 00:05:15.083 11:51:02 spdkcli_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/tcp.sh 00:05:15.340 * Looking for test storage... 00:05:15.340 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=149531 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 149531 00:05:15.340 11:51:02 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@829 -- # '[' -z 149531 ']' 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:15.340 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:15.340 11:51:02 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:15.341 [2024-07-25 11:51:02.431714] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:15.341 [2024-07-25 11:51:02.431764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149531 ] 00:05:15.341 EAL: No free 2048 kB hugepages reported on node 1 00:05:15.341 [2024-07-25 11:51:02.485411] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:15.341 [2024-07-25 11:51:02.559525] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:15.341 [2024-07-25 11:51:02.559528] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:16.278 11:51:03 spdkcli_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:16.278 11:51:03 spdkcli_tcp -- common/autotest_common.sh@862 -- # return 0 00:05:16.278 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:05:16.278 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=149547 00:05:16.278 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:05:16.278 [ 00:05:16.278 "bdev_malloc_delete", 00:05:16.278 "bdev_malloc_create", 00:05:16.278 "bdev_null_resize", 00:05:16.278 "bdev_null_delete", 00:05:16.278 "bdev_null_create", 00:05:16.278 "bdev_nvme_cuse_unregister", 00:05:16.278 "bdev_nvme_cuse_register", 00:05:16.278 "bdev_opal_new_user", 00:05:16.278 "bdev_opal_set_lock_state", 00:05:16.278 "bdev_opal_delete", 00:05:16.278 "bdev_opal_get_info", 00:05:16.278 "bdev_opal_create", 00:05:16.278 "bdev_nvme_opal_revert", 00:05:16.279 "bdev_nvme_opal_init", 00:05:16.279 "bdev_nvme_send_cmd", 00:05:16.279 "bdev_nvme_get_path_iostat", 00:05:16.279 "bdev_nvme_get_mdns_discovery_info", 00:05:16.279 "bdev_nvme_stop_mdns_discovery", 00:05:16.279 "bdev_nvme_start_mdns_discovery", 00:05:16.279 "bdev_nvme_set_multipath_policy", 00:05:16.279 "bdev_nvme_set_preferred_path", 00:05:16.279 "bdev_nvme_get_io_paths", 00:05:16.279 "bdev_nvme_remove_error_injection", 00:05:16.279 "bdev_nvme_add_error_injection", 00:05:16.279 "bdev_nvme_get_discovery_info", 00:05:16.279 "bdev_nvme_stop_discovery", 00:05:16.279 "bdev_nvme_start_discovery", 00:05:16.279 "bdev_nvme_get_controller_health_info", 00:05:16.279 "bdev_nvme_disable_controller", 00:05:16.279 "bdev_nvme_enable_controller", 00:05:16.279 "bdev_nvme_reset_controller", 00:05:16.279 "bdev_nvme_get_transport_statistics", 00:05:16.279 "bdev_nvme_apply_firmware", 00:05:16.279 "bdev_nvme_detach_controller", 00:05:16.279 "bdev_nvme_get_controllers", 00:05:16.279 "bdev_nvme_attach_controller", 00:05:16.279 "bdev_nvme_set_hotplug", 00:05:16.279 "bdev_nvme_set_options", 00:05:16.279 "bdev_passthru_delete", 00:05:16.279 "bdev_passthru_create", 00:05:16.279 "bdev_lvol_set_parent_bdev", 00:05:16.279 "bdev_lvol_set_parent", 00:05:16.279 "bdev_lvol_check_shallow_copy", 00:05:16.279 "bdev_lvol_start_shallow_copy", 00:05:16.279 "bdev_lvol_grow_lvstore", 00:05:16.279 "bdev_lvol_get_lvols", 00:05:16.279 "bdev_lvol_get_lvstores", 00:05:16.279 "bdev_lvol_delete", 00:05:16.279 "bdev_lvol_set_read_only", 00:05:16.279 "bdev_lvol_resize", 00:05:16.279 "bdev_lvol_decouple_parent", 00:05:16.279 "bdev_lvol_inflate", 00:05:16.279 "bdev_lvol_rename", 00:05:16.279 "bdev_lvol_clone_bdev", 00:05:16.279 "bdev_lvol_clone", 00:05:16.279 "bdev_lvol_snapshot", 00:05:16.279 "bdev_lvol_create", 00:05:16.279 "bdev_lvol_delete_lvstore", 00:05:16.279 
"bdev_lvol_rename_lvstore", 00:05:16.279 "bdev_lvol_create_lvstore", 00:05:16.279 "bdev_raid_set_options", 00:05:16.279 "bdev_raid_remove_base_bdev", 00:05:16.279 "bdev_raid_add_base_bdev", 00:05:16.279 "bdev_raid_delete", 00:05:16.279 "bdev_raid_create", 00:05:16.279 "bdev_raid_get_bdevs", 00:05:16.279 "bdev_error_inject_error", 00:05:16.279 "bdev_error_delete", 00:05:16.279 "bdev_error_create", 00:05:16.279 "bdev_split_delete", 00:05:16.279 "bdev_split_create", 00:05:16.279 "bdev_delay_delete", 00:05:16.279 "bdev_delay_create", 00:05:16.279 "bdev_delay_update_latency", 00:05:16.279 "bdev_zone_block_delete", 00:05:16.279 "bdev_zone_block_create", 00:05:16.279 "blobfs_create", 00:05:16.279 "blobfs_detect", 00:05:16.279 "blobfs_set_cache_size", 00:05:16.279 "bdev_aio_delete", 00:05:16.279 "bdev_aio_rescan", 00:05:16.279 "bdev_aio_create", 00:05:16.279 "bdev_ftl_set_property", 00:05:16.279 "bdev_ftl_get_properties", 00:05:16.279 "bdev_ftl_get_stats", 00:05:16.279 "bdev_ftl_unmap", 00:05:16.279 "bdev_ftl_unload", 00:05:16.279 "bdev_ftl_delete", 00:05:16.279 "bdev_ftl_load", 00:05:16.280 "bdev_ftl_create", 00:05:16.280 "bdev_virtio_attach_controller", 00:05:16.280 "bdev_virtio_scsi_get_devices", 00:05:16.280 "bdev_virtio_detach_controller", 00:05:16.280 "bdev_virtio_blk_set_hotplug", 00:05:16.280 "bdev_iscsi_delete", 00:05:16.280 "bdev_iscsi_create", 00:05:16.280 "bdev_iscsi_set_options", 00:05:16.280 "accel_error_inject_error", 00:05:16.280 "ioat_scan_accel_module", 00:05:16.280 "dsa_scan_accel_module", 00:05:16.280 "iaa_scan_accel_module", 00:05:16.280 "vfu_virtio_create_scsi_endpoint", 00:05:16.280 "vfu_virtio_scsi_remove_target", 00:05:16.280 "vfu_virtio_scsi_add_target", 00:05:16.280 "vfu_virtio_create_blk_endpoint", 00:05:16.280 "vfu_virtio_delete_endpoint", 00:05:16.280 "keyring_file_remove_key", 00:05:16.280 "keyring_file_add_key", 00:05:16.280 "keyring_linux_set_options", 00:05:16.280 "iscsi_get_histogram", 00:05:16.280 "iscsi_enable_histogram", 00:05:16.280 "iscsi_set_options", 00:05:16.280 "iscsi_get_auth_groups", 00:05:16.280 "iscsi_auth_group_remove_secret", 00:05:16.280 "iscsi_auth_group_add_secret", 00:05:16.280 "iscsi_delete_auth_group", 00:05:16.280 "iscsi_create_auth_group", 00:05:16.280 "iscsi_set_discovery_auth", 00:05:16.280 "iscsi_get_options", 00:05:16.280 "iscsi_target_node_request_logout", 00:05:16.280 "iscsi_target_node_set_redirect", 00:05:16.280 "iscsi_target_node_set_auth", 00:05:16.280 "iscsi_target_node_add_lun", 00:05:16.280 "iscsi_get_stats", 00:05:16.280 "iscsi_get_connections", 00:05:16.280 "iscsi_portal_group_set_auth", 00:05:16.280 "iscsi_start_portal_group", 00:05:16.280 "iscsi_delete_portal_group", 00:05:16.280 "iscsi_create_portal_group", 00:05:16.280 "iscsi_get_portal_groups", 00:05:16.280 "iscsi_delete_target_node", 00:05:16.280 "iscsi_target_node_remove_pg_ig_maps", 00:05:16.280 "iscsi_target_node_add_pg_ig_maps", 00:05:16.280 "iscsi_create_target_node", 00:05:16.280 "iscsi_get_target_nodes", 00:05:16.280 "iscsi_delete_initiator_group", 00:05:16.280 "iscsi_initiator_group_remove_initiators", 00:05:16.280 "iscsi_initiator_group_add_initiators", 00:05:16.280 "iscsi_create_initiator_group", 00:05:16.280 "iscsi_get_initiator_groups", 00:05:16.280 "nvmf_set_crdt", 00:05:16.280 "nvmf_set_config", 00:05:16.280 "nvmf_set_max_subsystems", 00:05:16.280 "nvmf_stop_mdns_prr", 00:05:16.280 "nvmf_publish_mdns_prr", 00:05:16.280 "nvmf_subsystem_get_listeners", 00:05:16.280 "nvmf_subsystem_get_qpairs", 00:05:16.281 "nvmf_subsystem_get_controllers", 00:05:16.281 
"nvmf_get_stats", 00:05:16.281 "nvmf_get_transports", 00:05:16.281 "nvmf_create_transport", 00:05:16.281 "nvmf_get_targets", 00:05:16.281 "nvmf_delete_target", 00:05:16.281 "nvmf_create_target", 00:05:16.281 "nvmf_subsystem_allow_any_host", 00:05:16.281 "nvmf_subsystem_remove_host", 00:05:16.281 "nvmf_subsystem_add_host", 00:05:16.281 "nvmf_ns_remove_host", 00:05:16.281 "nvmf_ns_add_host", 00:05:16.281 "nvmf_subsystem_remove_ns", 00:05:16.281 "nvmf_subsystem_add_ns", 00:05:16.281 "nvmf_subsystem_listener_set_ana_state", 00:05:16.281 "nvmf_discovery_get_referrals", 00:05:16.281 "nvmf_discovery_remove_referral", 00:05:16.281 "nvmf_discovery_add_referral", 00:05:16.281 "nvmf_subsystem_remove_listener", 00:05:16.281 "nvmf_subsystem_add_listener", 00:05:16.281 "nvmf_delete_subsystem", 00:05:16.281 "nvmf_create_subsystem", 00:05:16.281 "nvmf_get_subsystems", 00:05:16.281 "env_dpdk_get_mem_stats", 00:05:16.281 "nbd_get_disks", 00:05:16.281 "nbd_stop_disk", 00:05:16.281 "nbd_start_disk", 00:05:16.281 "ublk_recover_disk", 00:05:16.281 "ublk_get_disks", 00:05:16.281 "ublk_stop_disk", 00:05:16.281 "ublk_start_disk", 00:05:16.281 "ublk_destroy_target", 00:05:16.281 "ublk_create_target", 00:05:16.281 "virtio_blk_create_transport", 00:05:16.281 "virtio_blk_get_transports", 00:05:16.281 "vhost_controller_set_coalescing", 00:05:16.281 "vhost_get_controllers", 00:05:16.281 "vhost_delete_controller", 00:05:16.281 "vhost_create_blk_controller", 00:05:16.281 "vhost_scsi_controller_remove_target", 00:05:16.281 "vhost_scsi_controller_add_target", 00:05:16.281 "vhost_start_scsi_controller", 00:05:16.281 "vhost_create_scsi_controller", 00:05:16.281 "thread_set_cpumask", 00:05:16.281 "framework_get_governor", 00:05:16.281 "framework_get_scheduler", 00:05:16.281 "framework_set_scheduler", 00:05:16.281 "framework_get_reactors", 00:05:16.281 "thread_get_io_channels", 00:05:16.281 "thread_get_pollers", 00:05:16.281 "thread_get_stats", 00:05:16.281 "framework_monitor_context_switch", 00:05:16.281 "spdk_kill_instance", 00:05:16.281 "log_enable_timestamps", 00:05:16.281 "log_get_flags", 00:05:16.281 "log_clear_flag", 00:05:16.281 "log_set_flag", 00:05:16.281 "log_get_level", 00:05:16.281 "log_set_level", 00:05:16.281 "log_get_print_level", 00:05:16.281 "log_set_print_level", 00:05:16.282 "framework_enable_cpumask_locks", 00:05:16.282 "framework_disable_cpumask_locks", 00:05:16.282 "framework_wait_init", 00:05:16.282 "framework_start_init", 00:05:16.282 "scsi_get_devices", 00:05:16.282 "bdev_get_histogram", 00:05:16.282 "bdev_enable_histogram", 00:05:16.282 "bdev_set_qos_limit", 00:05:16.282 "bdev_set_qd_sampling_period", 00:05:16.282 "bdev_get_bdevs", 00:05:16.282 "bdev_reset_iostat", 00:05:16.282 "bdev_get_iostat", 00:05:16.282 "bdev_examine", 00:05:16.282 "bdev_wait_for_examine", 00:05:16.282 "bdev_set_options", 00:05:16.282 "notify_get_notifications", 00:05:16.282 "notify_get_types", 00:05:16.282 "accel_get_stats", 00:05:16.282 "accel_set_options", 00:05:16.282 "accel_set_driver", 00:05:16.282 "accel_crypto_key_destroy", 00:05:16.282 "accel_crypto_keys_get", 00:05:16.282 "accel_crypto_key_create", 00:05:16.282 "accel_assign_opc", 00:05:16.282 "accel_get_module_info", 00:05:16.282 "accel_get_opc_assignments", 00:05:16.282 "vmd_rescan", 00:05:16.282 "vmd_remove_device", 00:05:16.282 "vmd_enable", 00:05:16.282 "sock_get_default_impl", 00:05:16.282 "sock_set_default_impl", 00:05:16.282 "sock_impl_set_options", 00:05:16.282 "sock_impl_get_options", 00:05:16.282 "iobuf_get_stats", 00:05:16.282 "iobuf_set_options", 
00:05:16.282 "keyring_get_keys", 00:05:16.282 "framework_get_pci_devices", 00:05:16.282 "framework_get_config", 00:05:16.282 "framework_get_subsystems", 00:05:16.282 "vfu_tgt_set_base_path", 00:05:16.282 "trace_get_info", 00:05:16.282 "trace_get_tpoint_group_mask", 00:05:16.282 "trace_disable_tpoint_group", 00:05:16.282 "trace_enable_tpoint_group", 00:05:16.282 "trace_clear_tpoint_mask", 00:05:16.282 "trace_set_tpoint_mask", 00:05:16.282 "spdk_get_version", 00:05:16.282 "rpc_get_methods" 00:05:16.282 ] 00:05:16.282 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:05:16.282 11:51:03 spdkcli_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:05:16.282 11:51:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.282 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:05:16.282 11:51:03 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 149531 00:05:16.282 11:51:03 spdkcli_tcp -- common/autotest_common.sh@948 -- # '[' -z 149531 ']' 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@952 -- # kill -0 149531 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # uname 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149531 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149531' 00:05:16.283 killing process with pid 149531 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@967 -- # kill 149531 00:05:16.283 11:51:03 spdkcli_tcp -- common/autotest_common.sh@972 -- # wait 149531 00:05:16.851 00:05:16.851 real 0m1.514s 00:05:16.851 user 0m2.816s 00:05:16.851 sys 0m0.436s 00:05:16.851 11:51:03 spdkcli_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:16.851 11:51:03 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:05:16.851 ************************************ 00:05:16.851 END TEST spdkcli_tcp 00:05:16.851 ************************************ 00:05:16.851 11:51:03 -- common/autotest_common.sh@1142 -- # return 0 00:05:16.851 11:51:03 -- spdk/autotest.sh@180 -- # run_test dpdk_mem_utility /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.851 11:51:03 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:16.851 11:51:03 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:16.851 11:51:03 -- common/autotest_common.sh@10 -- # set +x 00:05:16.851 ************************************ 00:05:16.851 START TEST dpdk_mem_utility 00:05:16.851 ************************************ 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:05:16.851 * Looking for test storage... 
00:05:16.851 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/dpdk_memory_utility 00:05:16.851 11:51:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:16.851 11:51:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=149833 00:05:16.851 11:51:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 149833 00:05:16.851 11:51:03 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@829 -- # '[' -z 149833 ']' 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.851 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:16.851 11:51:03 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:16.851 [2024-07-25 11:51:03.999219] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:16.851 [2024-07-25 11:51:03.999263] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid149833 ] 00:05:16.851 EAL: No free 2048 kB hugepages reported on node 1 00:05:16.851 [2024-07-25 11:51:04.052640] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.110 [2024-07-25 11:51:04.134395] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.762 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:17.762 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@862 -- # return 0 00:05:17.762 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:05:17.762 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:05:17.762 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:17.762 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:17.762 { 00:05:17.762 "filename": "/tmp/spdk_mem_dump.txt" 00:05:17.762 } 00:05:17.762 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:17.762 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py 00:05:17.762 DPDK memory size 814.000000 MiB in 1 heap(s) 00:05:17.762 1 heaps totaling size 814.000000 MiB 00:05:17.762 size: 814.000000 MiB heap id: 0 00:05:17.762 end heaps---------- 00:05:17.762 8 mempools totaling size 598.116089 MiB 00:05:17.762 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:05:17.762 size: 158.602051 MiB name: PDU_data_out_Pool 00:05:17.762 size: 84.521057 MiB name: bdev_io_149833 00:05:17.762 size: 51.011292 MiB name: evtpool_149833 00:05:17.762 size: 
50.003479 MiB name: msgpool_149833 00:05:17.762 size: 21.763794 MiB name: PDU_Pool 00:05:17.762 size: 19.513306 MiB name: SCSI_TASK_Pool 00:05:17.762 size: 0.026123 MiB name: Session_Pool 00:05:17.762 end mempools------- 00:05:17.762 6 memzones totaling size 4.142822 MiB 00:05:17.762 size: 1.000366 MiB name: RG_ring_0_149833 00:05:17.762 size: 1.000366 MiB name: RG_ring_1_149833 00:05:17.762 size: 1.000366 MiB name: RG_ring_4_149833 00:05:17.762 size: 1.000366 MiB name: RG_ring_5_149833 00:05:17.762 size: 0.125366 MiB name: RG_ring_2_149833 00:05:17.763 size: 0.015991 MiB name: RG_ring_3_149833 00:05:17.763 end memzones------- 00:05:17.763 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/dpdk_mem_info.py -m 0 00:05:17.763 heap id: 0 total size: 814.000000 MiB number of busy elements: 41 number of free elements: 15 00:05:17.763 list of free elements. size: 12.519348 MiB 00:05:17.763 element at address: 0x200000400000 with size: 1.999512 MiB 00:05:17.763 element at address: 0x200018e00000 with size: 0.999878 MiB 00:05:17.763 element at address: 0x200019000000 with size: 0.999878 MiB 00:05:17.763 element at address: 0x200003e00000 with size: 0.996277 MiB 00:05:17.763 element at address: 0x200031c00000 with size: 0.994446 MiB 00:05:17.763 element at address: 0x200013800000 with size: 0.978699 MiB 00:05:17.763 element at address: 0x200007000000 with size: 0.959839 MiB 00:05:17.763 element at address: 0x200019200000 with size: 0.936584 MiB 00:05:17.763 element at address: 0x200000200000 with size: 0.841614 MiB 00:05:17.763 element at address: 0x20001aa00000 with size: 0.582886 MiB 00:05:17.763 element at address: 0x20000b200000 with size: 0.490723 MiB 00:05:17.763 element at address: 0x200000800000 with size: 0.487793 MiB 00:05:17.763 element at address: 0x200019400000 with size: 0.485657 MiB 00:05:17.763 element at address: 0x200027e00000 with size: 0.410034 MiB 00:05:17.763 element at address: 0x200003a00000 with size: 0.355530 MiB 00:05:17.763 list of standard malloc elements. 
size: 199.218079 MiB 00:05:17.763 element at address: 0x20000b3fff80 with size: 132.000122 MiB 00:05:17.763 element at address: 0x2000071fff80 with size: 64.000122 MiB 00:05:17.763 element at address: 0x200018efff80 with size: 1.000122 MiB 00:05:17.763 element at address: 0x2000190fff80 with size: 1.000122 MiB 00:05:17.763 element at address: 0x2000192fff80 with size: 1.000122 MiB 00:05:17.763 element at address: 0x2000003d9f00 with size: 0.140747 MiB 00:05:17.763 element at address: 0x2000192eff00 with size: 0.062622 MiB 00:05:17.763 element at address: 0x2000003fdf80 with size: 0.007935 MiB 00:05:17.763 element at address: 0x2000192efdc0 with size: 0.000305 MiB 00:05:17.763 element at address: 0x2000002d7740 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000002d7800 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000002d78c0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000002d7ac0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000002d7b80 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000002d7c40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000003d9e40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20000087ce00 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20000087cec0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000008fd180 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003a5b040 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003adb300 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003adb500 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003adf7c0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003affa80 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003affb40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200003eff0c0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000070fdd80 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20000b27da00 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20000b27dac0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20000b2fdd80 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000138fa8c0 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000192efc40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000192efd00 with size: 0.000183 MiB 00:05:17.763 element at address: 0x2000194bc740 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20001aa95380 with size: 0.000183 MiB 00:05:17.763 element at address: 0x20001aa95440 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200027e68f80 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200027e69040 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200027e6fc40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200027e6fe40 with size: 0.000183 MiB 00:05:17.763 element at address: 0x200027e6ff00 with size: 0.000183 MiB 00:05:17.763 list of memzone associated elements. 
size: 602.262573 MiB 00:05:17.763 element at address: 0x20001aa95500 with size: 211.416748 MiB 00:05:17.763 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:05:17.763 element at address: 0x200027e6ffc0 with size: 157.562561 MiB 00:05:17.763 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:05:17.763 element at address: 0x2000139fab80 with size: 84.020630 MiB 00:05:17.763 associated memzone info: size: 84.020508 MiB name: MP_bdev_io_149833_0 00:05:17.763 element at address: 0x2000009ff380 with size: 48.003052 MiB 00:05:17.763 associated memzone info: size: 48.002930 MiB name: MP_evtpool_149833_0 00:05:17.763 element at address: 0x200003fff380 with size: 48.003052 MiB 00:05:17.763 associated memzone info: size: 48.002930 MiB name: MP_msgpool_149833_0 00:05:17.763 element at address: 0x2000195be940 with size: 20.255554 MiB 00:05:17.763 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:05:17.763 element at address: 0x200031dfeb40 with size: 18.005066 MiB 00:05:17.763 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:05:17.763 element at address: 0x2000005ffe00 with size: 2.000488 MiB 00:05:17.763 associated memzone info: size: 2.000366 MiB name: RG_MP_evtpool_149833 00:05:17.763 element at address: 0x200003bffe00 with size: 2.000488 MiB 00:05:17.763 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_149833 00:05:17.763 element at address: 0x2000002d7d00 with size: 1.008118 MiB 00:05:17.763 associated memzone info: size: 1.007996 MiB name: MP_evtpool_149833 00:05:17.763 element at address: 0x20000b2fde40 with size: 1.008118 MiB 00:05:17.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:05:17.763 element at address: 0x2000194bc800 with size: 1.008118 MiB 00:05:17.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:05:17.763 element at address: 0x2000070fde40 with size: 1.008118 MiB 00:05:17.763 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:05:17.763 element at address: 0x2000008fd240 with size: 1.008118 MiB 00:05:17.763 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:05:17.763 element at address: 0x200003eff180 with size: 1.000488 MiB 00:05:17.763 associated memzone info: size: 1.000366 MiB name: RG_ring_0_149833 00:05:17.763 element at address: 0x200003affc00 with size: 1.000488 MiB 00:05:17.763 associated memzone info: size: 1.000366 MiB name: RG_ring_1_149833 00:05:17.763 element at address: 0x2000138fa980 with size: 1.000488 MiB 00:05:17.763 associated memzone info: size: 1.000366 MiB name: RG_ring_4_149833 00:05:17.763 element at address: 0x200031cfe940 with size: 1.000488 MiB 00:05:17.763 associated memzone info: size: 1.000366 MiB name: RG_ring_5_149833 00:05:17.763 element at address: 0x200003a5b100 with size: 0.500488 MiB 00:05:17.763 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_149833 00:05:17.763 element at address: 0x20000b27db80 with size: 0.500488 MiB 00:05:17.763 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:05:17.763 element at address: 0x20000087cf80 with size: 0.500488 MiB 00:05:17.763 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:05:17.763 element at address: 0x20001947c540 with size: 0.250488 MiB 00:05:17.763 associated memzone info: size: 0.250366 MiB name: RG_MP_PDU_immediate_data_Pool 00:05:17.763 element at address: 0x200003adf880 with size: 0.125488 MiB 00:05:17.763 associated memzone 
info: size: 0.125366 MiB name: RG_ring_2_149833 00:05:17.763 element at address: 0x2000070f5b80 with size: 0.031738 MiB 00:05:17.763 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:05:17.763 element at address: 0x200027e69100 with size: 0.023743 MiB 00:05:17.763 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:05:17.763 element at address: 0x200003adb5c0 with size: 0.016113 MiB 00:05:17.763 associated memzone info: size: 0.015991 MiB name: RG_ring_3_149833 00:05:17.763 element at address: 0x200027e6f240 with size: 0.002441 MiB 00:05:17.763 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:05:17.763 element at address: 0x2000002d7980 with size: 0.000305 MiB 00:05:17.763 associated memzone info: size: 0.000183 MiB name: MP_msgpool_149833 00:05:17.763 element at address: 0x200003adb3c0 with size: 0.000305 MiB 00:05:17.763 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_149833 00:05:17.763 element at address: 0x200027e6fd00 with size: 0.000305 MiB 00:05:17.763 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:05:17.763 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:05:17.763 11:51:04 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 149833 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@948 -- # '[' -z 149833 ']' 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@952 -- # kill -0 149833 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # uname 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 149833 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@966 -- # echo 'killing process with pid 149833' 00:05:17.763 killing process with pid 149833 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@967 -- # kill 149833 00:05:17.763 11:51:04 dpdk_mem_utility -- common/autotest_common.sh@972 -- # wait 149833 00:05:18.022 00:05:18.022 real 0m1.380s 00:05:18.022 user 0m1.468s 00:05:18.022 sys 0m0.371s 00:05:18.022 11:51:05 dpdk_mem_utility -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:18.022 11:51:05 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:05:18.022 ************************************ 00:05:18.022 END TEST dpdk_mem_utility 00:05:18.022 ************************************ 00:05:18.281 11:51:05 -- common/autotest_common.sh@1142 -- # return 0 00:05:18.281 11:51:05 -- spdk/autotest.sh@181 -- # run_test event /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:18.281 11:51:05 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:18.281 11:51:05 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.281 11:51:05 -- common/autotest_common.sh@10 -- # set +x 00:05:18.281 ************************************ 00:05:18.281 START TEST event 00:05:18.281 ************************************ 00:05:18.281 11:51:05 event -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event.sh 00:05:18.281 * Looking for test storage... 
00:05:18.281 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:18.281 11:51:05 event -- event/event.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/nbd_common.sh 00:05:18.281 11:51:05 event -- bdev/nbd_common.sh@6 -- # set -e 00:05:18.281 11:51:05 event -- event/event.sh@45 -- # run_test event_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.281 11:51:05 event -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:05:18.281 11:51:05 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:18.281 11:51:05 event -- common/autotest_common.sh@10 -- # set +x 00:05:18.281 ************************************ 00:05:18.281 START TEST event_perf 00:05:18.281 ************************************ 00:05:18.281 11:51:05 event.event_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:05:18.281 Running I/O for 1 seconds...[2024-07-25 11:51:05.442299] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:18.281 [2024-07-25 11:51:05.442370] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150119 ] 00:05:18.281 EAL: No free 2048 kB hugepages reported on node 1 00:05:18.281 [2024-07-25 11:51:05.500348] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:18.540 [2024-07-25 11:51:05.583764] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:18.540 [2024-07-25 11:51:05.583858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:18.540 [2024-07-25 11:51:05.583956] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:18.540 [2024-07-25 11:51:05.583958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:19.474 Running I/O for 1 seconds... 00:05:19.474 lcore 0: 202615 00:05:19.474 lcore 1: 202611 00:05:19.474 lcore 2: 202612 00:05:19.474 lcore 3: 202613 00:05:19.474 done. 00:05:19.474 00:05:19.474 real 0m1.234s 00:05:19.474 user 0m4.150s 00:05:19.474 sys 0m0.079s 00:05:19.474 11:51:06 event.event_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:19.474 11:51:06 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:05:19.474 ************************************ 00:05:19.474 END TEST event_perf 00:05:19.474 ************************************ 00:05:19.474 11:51:06 event -- common/autotest_common.sh@1142 -- # return 0 00:05:19.474 11:51:06 event -- event/event.sh@46 -- # run_test event_reactor /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.474 11:51:06 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:19.474 11:51:06 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:19.474 11:51:06 event -- common/autotest_common.sh@10 -- # set +x 00:05:19.474 ************************************ 00:05:19.474 START TEST event_reactor 00:05:19.474 ************************************ 00:05:19.474 11:51:06 event.event_reactor -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor/reactor -t 1 00:05:19.733 [2024-07-25 11:51:06.737822] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:19.733 [2024-07-25 11:51:06.737887] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150381 ] 00:05:19.733 EAL: No free 2048 kB hugepages reported on node 1 00:05:19.733 [2024-07-25 11:51:06.794665] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.733 [2024-07-25 11:51:06.866627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.704 test_start 00:05:20.704 oneshot 00:05:20.704 tick 100 00:05:20.704 tick 100 00:05:20.704 tick 250 00:05:20.704 tick 100 00:05:20.704 tick 100 00:05:20.704 tick 100 00:05:20.704 tick 250 00:05:20.704 tick 500 00:05:20.704 tick 100 00:05:20.704 tick 100 00:05:20.704 tick 250 00:05:20.704 tick 100 00:05:20.704 tick 100 00:05:20.704 test_end 00:05:20.704 00:05:20.704 real 0m1.217s 00:05:20.704 user 0m1.143s 00:05:20.704 sys 0m0.069s 00:05:20.704 11:51:07 event.event_reactor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:20.704 11:51:07 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:05:20.704 ************************************ 00:05:20.704 END TEST event_reactor 00:05:20.704 ************************************ 00:05:20.962 11:51:07 event -- common/autotest_common.sh@1142 -- # return 0 00:05:20.962 11:51:07 event -- event/event.sh@47 -- # run_test event_reactor_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.962 11:51:07 event -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:05:20.962 11:51:07 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:20.962 11:51:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:20.962 ************************************ 00:05:20.962 START TEST event_reactor_perf 00:05:20.962 ************************************ 00:05:20.962 11:51:07 event.event_reactor_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/reactor_perf/reactor_perf -t 1 00:05:20.962 [2024-07-25 11:51:08.019990] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:20.962 [2024-07-25 11:51:08.020063] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid150633 ] 00:05:20.962 EAL: No free 2048 kB hugepages reported on node 1 00:05:20.962 [2024-07-25 11:51:08.077717] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.962 [2024-07-25 11:51:08.150643] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.338 test_start 00:05:22.338 test_end 00:05:22.339 Performance: 483799 events per second 00:05:22.339 00:05:22.339 real 0m1.222s 00:05:22.339 user 0m1.144s 00:05:22.339 sys 0m0.073s 00:05:22.339 11:51:09 event.event_reactor_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:22.339 11:51:09 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 ************************************ 00:05:22.339 END TEST event_reactor_perf 00:05:22.339 ************************************ 00:05:22.339 11:51:09 event -- common/autotest_common.sh@1142 -- # return 0 00:05:22.339 11:51:09 event -- event/event.sh@49 -- # uname -s 00:05:22.339 11:51:09 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:05:22.339 11:51:09 event -- event/event.sh@50 -- # run_test event_scheduler /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.339 11:51:09 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:22.339 11:51:09 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:22.339 11:51:09 event -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 ************************************ 00:05:22.339 START TEST event_scheduler 00:05:22.339 ************************************ 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler.sh 00:05:22.339 * Looking for test storage... 00:05:22.339 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler 00:05:22.339 11:51:09 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:05:22.339 11:51:09 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:05:22.339 11:51:09 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=151203 00:05:22.339 11:51:09 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:05:22.339 11:51:09 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 151203 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@829 -- # '[' -z 151203 ']' 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:22.339 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:22.339 11:51:09 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:22.339 [2024-07-25 11:51:09.401628] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:22.339 [2024-07-25 11:51:09.401673] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid151203 ] 00:05:22.339 EAL: No free 2048 kB hugepages reported on node 1 00:05:22.339 [2024-07-25 11:51:09.451786] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:05:22.339 [2024-07-25 11:51:09.535030] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:22.339 [2024-07-25 11:51:09.535119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:22.339 [2024-07-25 11:51:09.535141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:22.339 [2024-07-25 11:51:09.535143] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@862 -- # return 0 00:05:23.275 11:51:10 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 [2024-07-25 11:51:10.241599] dpdk_governor.c: 173:_init: *ERROR*: App core mask contains some but not all of a set of SMT siblings 00:05:23.275 [2024-07-25 11:51:10.241625] scheduler_dynamic.c: 270:init: *NOTICE*: Unable to initialize dpdk governor 00:05:23.275 [2024-07-25 11:51:10.241633] scheduler_dynamic.c: 416:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:05:23.275 [2024-07-25 11:51:10.241639] scheduler_dynamic.c: 418:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:05:23.275 [2024-07-25 11:51:10.241644] scheduler_dynamic.c: 420:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.275 11:51:10 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 [2024-07-25 11:51:10.313300] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
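The bring-up above follows the usual pattern for an SPDK app launched with --wait-for-rpc: the test selects the dynamic scheduler over RPC (continuing past the notice that the DPDK governor could not initialize), then releases initialization with framework_start_init. A minimal sketch of that handshake, assuming the app is listening on the /var/tmp/spdk.sock socket used throughout this run and driving it with scripts/rpc.py instead of the test's rpc_cmd wrapper:

    # pick the dynamic scheduler while the app is still held at --wait-for-rpc
    scripts/rpc.py -s /var/tmp/spdk.sock framework_set_scheduler dynamic
    # let subsystem initialization proceed; the reactors then start scheduling
    scripts/rpc.py -s /var/tmp/spdk.sock framework_start_init

Both method names appear in the rpc_get_methods listing captured earlier in this log.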
00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.275 11:51:10 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:23.275 11:51:10 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 ************************************ 00:05:23.275 START TEST scheduler_create_thread 00:05:23.275 ************************************ 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1123 -- # scheduler_create_thread 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 2 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.275 3 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.275 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 4 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 5 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 6 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 7 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 8 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 9 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 10 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:23.276 11:51:10 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:24.653 11:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:24.653 11:51:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:05:24.653 11:51:11 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:05:24.653 11:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:24.653 11:51:11 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 11:51:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:26.029 00:05:26.029 real 0m2.618s 00:05:26.029 user 0m0.023s 00:05:26.029 sys 0m0.005s 00:05:26.029 11:51:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.029 11:51:12 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:05:26.029 ************************************ 00:05:26.029 END TEST scheduler_create_thread 00:05:26.029 ************************************ 00:05:26.029 11:51:12 event.event_scheduler -- common/autotest_common.sh@1142 -- # return 0 00:05:26.029 11:51:12 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:05:26.029 11:51:12 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 151203 00:05:26.029 11:51:12 event.event_scheduler -- common/autotest_common.sh@948 -- # '[' -z 151203 ']' 00:05:26.029 11:51:12 event.event_scheduler -- common/autotest_common.sh@952 -- # kill -0 151203 00:05:26.029 11:51:12 event.event_scheduler -- common/autotest_common.sh@953 -- # uname 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 151203 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@966 -- # echo 'killing process with pid 151203' 00:05:26.029 killing process with pid 151203 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@967 -- # kill 151203 00:05:26.029 11:51:13 event.event_scheduler -- common/autotest_common.sh@972 -- # wait 151203 00:05:26.287 [2024-07-25 11:51:13.443429] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
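The scheduler_create_thread subtest above drives the scheduler test app through its RPC plugin: it creates active and idle threads pinned to individual cores, lowers one thread to 50% activity, and deletes another. A short sketch of those calls, using the rpc_cmd wrapper that scheduler.sh binds at the start of the test (the thread ids 11 and 12 are the ones the test captured from its earlier create calls, not fixed values):

    # active thread pinned to core 0, reporting 100% busy
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100
    # idle thread pinned to core 0
    rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0
    # set a previously created thread to 50% active, then delete another by id
    rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50
    rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12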
00:05:26.545 00:05:26.545 real 0m4.349s 00:05:26.545 user 0m8.324s 00:05:26.545 sys 0m0.347s 00:05:26.545 11:51:13 event.event_scheduler -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:26.545 11:51:13 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:05:26.545 ************************************ 00:05:26.545 END TEST event_scheduler 00:05:26.545 ************************************ 00:05:26.545 11:51:13 event -- common/autotest_common.sh@1142 -- # return 0 00:05:26.545 11:51:13 event -- event/event.sh@51 -- # modprobe -n nbd 00:05:26.545 11:51:13 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:05:26.545 11:51:13 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:26.545 11:51:13 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:26.545 11:51:13 event -- common/autotest_common.sh@10 -- # set +x 00:05:26.545 ************************************ 00:05:26.545 START TEST app_repeat 00:05:26.545 ************************************ 00:05:26.545 11:51:13 event.app_repeat -- common/autotest_common.sh@1123 -- # app_repeat_test 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:05:26.545 11:51:13 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@19 -- # repeat_pid=152044 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 152044' 00:05:26.546 Process app_repeat pid: 152044 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:05:26.546 spdk_app_start Round 0 00:05:26.546 11:51:13 event.app_repeat -- event/event.sh@25 -- # waitforlisten 152044 /var/tmp/spdk-nbd.sock 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 152044 ']' 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:26.546 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:26.546 11:51:13 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:26.546 [2024-07-25 11:51:13.745529] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:26.546 [2024-07-25 11:51:13.745581] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid152044 ] 00:05:26.546 EAL: No free 2048 kB hugepages reported on node 1 00:05:26.805 [2024-07-25 11:51:13.800962] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:26.805 [2024-07-25 11:51:13.883065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:26.805 [2024-07-25 11:51:13.883069] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.372 11:51:14 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:27.372 11:51:14 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:27.372 11:51:14 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.632 Malloc0 00:05:27.632 11:51:14 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:27.891 Malloc1 00:05:27.891 11:51:14 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:27.891 11:51:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:27.892 11:51:14 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:27.892 11:51:14 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:27.892 11:51:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:27.892 11:51:14 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.892 11:51:14 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:27.892 /dev/nbd0 00:05:27.892 11:51:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:27.892 11:51:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:27.892 11:51:15 event.app_repeat 
-- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:27.892 1+0 records in 00:05:27.892 1+0 records out 00:05:27.892 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000127823 s, 32.0 MB/s 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:27.892 11:51:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:27.892 11:51:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:27.892 11:51:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:27.892 11:51:15 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:28.150 /dev/nbd1 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:28.151 1+0 records in 00:05:28.151 1+0 records out 00:05:28.151 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000194976 s, 21.0 MB/s 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:28.151 11:51:15 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:28.151 11:51:15 event.app_repeat -- 
bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.151 11:51:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:28.410 { 00:05:28.410 "nbd_device": "/dev/nbd0", 00:05:28.410 "bdev_name": "Malloc0" 00:05:28.410 }, 00:05:28.410 { 00:05:28.410 "nbd_device": "/dev/nbd1", 00:05:28.410 "bdev_name": "Malloc1" 00:05:28.410 } 00:05:28.410 ]' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:28.410 { 00:05:28.410 "nbd_device": "/dev/nbd0", 00:05:28.410 "bdev_name": "Malloc0" 00:05:28.410 }, 00:05:28.410 { 00:05:28.410 "nbd_device": "/dev/nbd1", 00:05:28.410 "bdev_name": "Malloc1" 00:05:28.410 } 00:05:28.410 ]' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:28.410 /dev/nbd1' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:28.410 /dev/nbd1' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:28.410 256+0 records in 00:05:28.410 256+0 records out 00:05:28.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103231 s, 102 MB/s 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:28.410 256+0 records in 00:05:28.410 256+0 records out 00:05:28.410 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0141058 s, 74.3 MB/s 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:28.410 256+0 records in 00:05:28.410 256+0 records out 00:05:28.410 1048576 bytes (1.0 MB, 1.0 MiB) 
copied, 0.0154334 s, 67.9 MB/s 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.410 11:51:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:28.669 11:51:15 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:28.928 11:51:15 
event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:28.928 11:51:15 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:28.928 11:51:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:28.929 11:51:16 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:28.929 11:51:16 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:28.929 11:51:16 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:28.929 11:51:16 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:28.929 11:51:16 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:28.929 11:51:16 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:29.188 11:51:16 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:29.447 [2024-07-25 11:51:16.537105] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:29.447 [2024-07-25 11:51:16.605233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:29.447 [2024-07-25 11:51:16.605236] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:29.447 [2024-07-25 11:51:16.645896] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:29.447 [2024-07-25 11:51:16.645939] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:32.734 11:51:19 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:32.734 11:51:19 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:32.734 spdk_app_start Round 1 00:05:32.735 11:51:19 event.app_repeat -- event/event.sh@25 -- # waitforlisten 152044 /var/tmp/spdk-nbd.sock 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 152044 ']' 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:32.735 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
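[Editor's note] Each app_repeat round, starting with Round 0 above, runs the same NBD data-verify pattern: fill a temp file with random data, write it through both exported devices, then read it back and compare. A condensed sketch of that flow follows; the dd/cmp invocations mirror the trace, while the temp-file path is shortened here for readability (the trace uses spdk/test/event/nbdrandtest).
  # Sketch of the per-round write/verify cycle (not the real nbd_dd_data_verify helper).
  rand=/tmp/nbdrandtest
  dd if=/dev/urandom of="$rand" bs=4096 count=256             # 1 MiB of random data
  for nbd in /dev/nbd0 /dev/nbd1; do
    dd if="$rand" of="$nbd" bs=4096 count=256 oflag=direct    # write it to each exported Malloc bdev
  done
  for nbd in /dev/nbd0 /dev/nbd1; do
    cmp -b -n 1M "$rand" "$nbd"                               # read back and verify byte-for-byte
  done
  rm "$rand"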
00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:32.735 11:51:19 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:32.735 11:51:19 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.735 Malloc0 00:05:32.735 11:51:19 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:32.735 Malloc1 00:05:32.735 11:51:19 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.735 11:51:19 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:32.994 /dev/nbd0 00:05:32.994 11:51:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:32.994 11:51:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:32.994 1+0 records in 00:05:32.994 1+0 records out 00:05:32.994 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000177572 s, 23.1 MB/s 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:32.994 11:51:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:32.994 11:51:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:32.994 11:51:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:32.994 11:51:20 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:33.254 /dev/nbd1 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:33.254 1+0 records in 00:05:33.254 1+0 records out 00:05:33.254 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000141644 s, 28.9 MB/s 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:33.254 11:51:20 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:33.254 { 00:05:33.254 "nbd_device": "/dev/nbd0", 00:05:33.254 "bdev_name": "Malloc0" 00:05:33.254 }, 00:05:33.254 { 00:05:33.254 "nbd_device": "/dev/nbd1", 00:05:33.254 "bdev_name": "Malloc1" 00:05:33.254 } 00:05:33.254 ]' 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:33.254 11:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:33.254 { 00:05:33.254 "nbd_device": "/dev/nbd0", 00:05:33.254 "bdev_name": "Malloc0" 00:05:33.254 }, 00:05:33.254 { 00:05:33.254 "nbd_device": "/dev/nbd1", 00:05:33.254 "bdev_name": "Malloc1" 00:05:33.254 } 00:05:33.254 ]' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:33.514 /dev/nbd1' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:33.514 /dev/nbd1' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:33.514 256+0 records in 00:05:33.514 256+0 records out 00:05:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0103255 s, 102 MB/s 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:33.514 256+0 records in 00:05:33.514 256+0 records out 00:05:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0151167 s, 69.4 MB/s 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:33.514 256+0 records in 00:05:33.514 256+0 records out 00:05:33.514 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0152942 s, 68.6 MB/s 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.514 11:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:33.774 11:51:20 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:34.033 11:51:21 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:34.033 11:51:21 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:34.293 11:51:21 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:34.552 [2024-07-25 11:51:21.556306] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:34.552 [2024-07-25 11:51:21.623667] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:34.552 [2024-07-25 11:51:21.623682] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:34.552 [2024-07-25 11:51:21.665352] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:34.552 [2024-07-25 11:51:21.665392] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:37.840 spdk_app_start Round 2 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@25 -- # waitforlisten 152044 /var/tmp/spdk-nbd.sock 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 152044 ']' 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:37.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
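[Editor's note] The count=2 and count=0 checks in the rounds above come from parsing the nbd_get_disks JSON shown in the trace. A small sketch of that parsing step, assuming the same JSON shape; the rpc.py path is abbreviated here.
  # Count how many NBD devices the target currently exports (illustrative).
  rpc="./scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
  disks_json=$($rpc nbd_get_disks)
  names=$(echo "$disks_json" | jq -r '.[] | .nbd_device')     # e.g. /dev/nbd0 and /dev/nbd1
  count=$(echo "$names" | grep -c /dev/nbd || true)           # 2 while mapped, 0 after nbd_stop_disk
  echo "mapped NBD devices: $count"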
00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:37.840 11:51:24 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.840 Malloc0 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:37.840 Malloc1 00:05:37.840 11:51:24 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:37.840 11:51:24 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:37.840 /dev/nbd0 00:05:37.840 11:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:37.840 11:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd0 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd0 /proc/partitions 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:37.840 11:51:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd0 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 
bs=4096 count=1 iflag=direct 00:05:38.099 1+0 records in 00:05:38.099 1+0 records out 00:05:38.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000187152 s, 21.9 MB/s 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@15 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:38.099 /dev/nbd1 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@866 -- # local nbd_name=nbd1 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@867 -- # local i 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i = 1 )) 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@869 -- # (( i <= 20 )) 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@870 -- # grep -q -w nbd1 /proc/partitions 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@871 -- # break 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i = 1 )) 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@882 -- # (( i <= 20 )) 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@883 -- # dd if=/dev/nbd1 of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:38.099 1+0 records in 00:05:38.099 1+0 records out 00:05:38.099 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00018833 s, 21.7 MB/s 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@884 -- # stat -c %s /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@884 -- # size=4096 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@885 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdtest 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@886 -- # '[' 4096 '!=' 0 ']' 00:05:38.099 11:51:25 event.app_repeat -- common/autotest_common.sh@887 -- # return 0 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.099 11:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # 
nbd_disks_json='[ 00:05:38.358 { 00:05:38.358 "nbd_device": "/dev/nbd0", 00:05:38.358 "bdev_name": "Malloc0" 00:05:38.358 }, 00:05:38.358 { 00:05:38.358 "nbd_device": "/dev/nbd1", 00:05:38.358 "bdev_name": "Malloc1" 00:05:38.358 } 00:05:38.358 ]' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:38.358 { 00:05:38.358 "nbd_device": "/dev/nbd0", 00:05:38.358 "bdev_name": "Malloc0" 00:05:38.358 }, 00:05:38.358 { 00:05:38.358 "nbd_device": "/dev/nbd1", 00:05:38.358 "bdev_name": "Malloc1" 00:05:38.358 } 00:05:38.358 ]' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:38.358 /dev/nbd1' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:38.358 /dev/nbd1' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:38.358 256+0 records in 00:05:38.358 256+0 records out 00:05:38.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0100577 s, 104 MB/s 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:38.358 256+0 records in 00:05:38.358 256+0 records out 00:05:38.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0137713 s, 76.1 MB/s 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:38.358 256+0 records in 00:05:38.358 256+0 records out 00:05:38.358 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0150792 s, 69.5 MB/s 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@72 -- # 
local tmp_file=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd0 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest /dev/nbd1 00:05:38.358 11:51:25 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/nbdrandtest 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:38.633 11:51:25 event.app_repeat -- bdev/nbd_common.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@61 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:38.903 11:51:25 event.app_repeat -- bdev/nbd_common.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:38.903 11:51:26 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:39.161 11:51:26 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:39.161 11:51:26 event.app_repeat -- event/event.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:39.161 11:51:26 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:39.420 [2024-07-25 11:51:26.571325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:39.420 [2024-07-25 11:51:26.638081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:39.420 [2024-07-25 11:51:26.638084] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:39.679 [2024-07-25 11:51:26.679104] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:39.679 [2024-07-25 11:51:26.679139] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:42.211 11:51:29 event.app_repeat -- event/event.sh@38 -- # waitforlisten 152044 /var/tmp/spdk-nbd.sock 00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@829 -- # '[' -z 152044 ']' 00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:42.211 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
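[Editor's note] The repeated "(( i <= 20 ))" / "grep -q -w nbdX /proc/partitions" fragments throughout the rounds are a polling loop that waits for an NBD device to appear after nbd_start_disk (and to disappear again after nbd_stop_disk). Roughly, and only as a sketch of the pattern visible in the trace; the retry count matches the trace, the sleep interval is an assumption since the trace's grep succeeds on the first try.
  # Wait until the kernel lists the device in /proc/partitions (waitfornbd-style sketch).
  wait_for_nbd() {
    local name=$1 i
    for ((i = 1; i <= 20; i++)); do
      grep -q -w "$name" /proc/partitions && return 0
      sleep 0.1                                   # interval not shown in the trace
    done
    return 1
  }
  wait_for_nbd nbd0 || echo "nbd0 never showed up" >&2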
00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.211 11:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@862 -- # return 0 00:05:42.470 11:51:29 event.app_repeat -- event/event.sh@39 -- # killprocess 152044 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@948 -- # '[' -z 152044 ']' 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@952 -- # kill -0 152044 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@953 -- # uname 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 152044 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@966 -- # echo 'killing process with pid 152044' 00:05:42.470 killing process with pid 152044 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@967 -- # kill 152044 00:05:42.470 11:51:29 event.app_repeat -- common/autotest_common.sh@972 -- # wait 152044 00:05:42.729 spdk_app_start is called in Round 0. 00:05:42.729 Shutdown signal received, stop current app iteration 00:05:42.729 Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 reinitialization... 00:05:42.729 spdk_app_start is called in Round 1. 00:05:42.729 Shutdown signal received, stop current app iteration 00:05:42.729 Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 reinitialization... 00:05:42.729 spdk_app_start is called in Round 2. 00:05:42.729 Shutdown signal received, stop current app iteration 00:05:42.729 Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 reinitialization... 00:05:42.729 spdk_app_start is called in Round 3. 
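[Editor's note] The teardown traced just above (and earlier, after the scheduler test) follows one kill pattern: confirm the PID is still alive with kill -0, look up its command name with ps so a sudo wrapper is never signalled by mistake, then kill and wait. A simplified stand-in for that killprocess sequence; the traced helper special-cases sudo-wrapped processes, which this sketch only refuses to touch.
  # Simplified stand-in for the killprocess steps in the trace.
  kill_test_process() {
    local pid=$1 name
    kill -0 "$pid" 2>/dev/null || return 0          # already gone
    name=$(ps --no-headers -o comm= "$pid")         # e.g. reactor_0 in the trace
    [ "$name" = sudo ] && return 1                  # real helper handles this case differently
    echo "killing process with pid $pid"
    kill "$pid"
    wait "$pid" 2>/dev/null || true                 # wait only succeeds for a child of this shell
  }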
00:05:42.729 Shutdown signal received, stop current app iteration 00:05:42.730 11:51:29 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:42.730 11:51:29 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:42.730 00:05:42.730 real 0m16.076s 00:05:42.730 user 0m34.913s 00:05:42.730 sys 0m2.317s 00:05:42.730 11:51:29 event.app_repeat -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:42.730 11:51:29 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:42.730 ************************************ 00:05:42.730 END TEST app_repeat 00:05:42.730 ************************************ 00:05:42.730 11:51:29 event -- common/autotest_common.sh@1142 -- # return 0 00:05:42.730 11:51:29 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:42.730 11:51:29 event -- event/event.sh@55 -- # run_test cpu_locks /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.730 11:51:29 event -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.730 11:51:29 event -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.730 11:51:29 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.730 ************************************ 00:05:42.730 START TEST cpu_locks 00:05:42.730 ************************************ 00:05:42.730 11:51:29 event.cpu_locks -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event/cpu_locks.sh 00:05:42.730 * Looking for test storage... 00:05:42.730 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/event 00:05:42.730 11:51:29 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:42.730 11:51:29 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:42.730 11:51:29 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:42.730 11:51:29 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:42.730 11:51:29 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:42.730 11:51:29 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:42.730 11:51:29 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.730 ************************************ 00:05:42.730 START TEST default_locks 00:05:42.730 ************************************ 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@1123 -- # default_locks 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=155026 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 155026 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 155026 ']' 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:42.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
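The default_locks run starting here reduces to: launch spdk_tgt pinned to core 0 (-m 0x1), wait for its RPC socket, and confirm the reactor is holding a per-core file lock by querying lslocks for that PID. A hedged sketch of the check the next entries trace (binary path taken from the log; the sleep stands in for the waitforlisten helper):

    spdk_tgt=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt
    "$spdk_tgt" -m 0x1 &      # pin the target to core 0
    pid=$!
    sleep 1                   # stand-in for waitforlisten on /var/tmp/spdk.sock
    # The reactor should hold a file lock named spdk_cpu_lock_* for its core.
    lslocks -p "$pid" | grep -q spdk_cpu_lock && echo "core lock held by $pid"
    kill "$pid"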
00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:42.730 11:51:29 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.989 [2024-07-25 11:51:30.003646] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:42.989 [2024-07-25 11:51:30.003697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155026 ] 00:05:42.989 EAL: No free 2048 kB hugepages reported on node 1 00:05:42.989 [2024-07-25 11:51:30.062337] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:42.989 [2024-07-25 11:51:30.146103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.557 11:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:43.557 11:51:30 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 0 00:05:43.557 11:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 155026 00:05:43.816 11:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 155026 00:05:43.816 11:51:30 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:44.075 lslocks: write error 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 155026 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@948 -- # '[' -z 155026 ']' 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@952 -- # kill -0 155026 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # uname 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155026 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155026' 00:05:44.075 killing process with pid 155026 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@967 -- # kill 155026 00:05:44.075 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # wait 155026 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 155026 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@648 -- # local es=0 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 155026 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- 
common/autotest_common.sh@651 -- # waitforlisten 155026 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@829 -- # '[' -z 155026 ']' 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.333 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.333 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (155026) - No such process 00:05:44.333 ERROR: process (pid: 155026) is no longer running 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@862 -- # return 1 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@651 -- # es=1 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:44.333 00:05:44.333 real 0m1.572s 00:05:44.333 user 0m1.655s 00:05:44.333 sys 0m0.511s 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:44.333 11:51:31 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.333 ************************************ 00:05:44.333 END TEST default_locks 00:05:44.333 ************************************ 00:05:44.333 11:51:31 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:44.333 11:51:31 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:44.333 11:51:31 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:44.333 11:51:31 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:44.333 11:51:31 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:44.592 ************************************ 00:05:44.592 START TEST default_locks_via_rpc 00:05:44.592 ************************************ 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1123 -- # default_locks_via_rpc 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=155293 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 155293 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- 
common/autotest_common.sh@829 -- # '[' -z 155293 ']' 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:44.592 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:44.592 11:51:31 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:44.592 [2024-07-25 11:51:31.645594] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:44.592 [2024-07-25 11:51:31.645636] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155293 ] 00:05:44.592 EAL: No free 2048 kB hugepages reported on node 1 00:05:44.592 [2024-07-25 11:51:31.697670] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.592 [2024-07-25 11:51:31.777263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 155293 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 155293 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 
00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 155293 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@948 -- # '[' -z 155293 ']' 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@952 -- # kill -0 155293 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # uname 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:45.528 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155293 00:05:45.529 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:45.529 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:45.529 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155293' 00:05:45.529 killing process with pid 155293 00:05:45.529 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@967 -- # kill 155293 00:05:45.529 11:51:32 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # wait 155293 00:05:45.787 00:05:45.787 real 0m1.407s 00:05:45.787 user 0m1.480s 00:05:45.787 sys 0m0.430s 00:05:45.787 11:51:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:45.787 11:51:33 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:45.787 ************************************ 00:05:45.787 END TEST default_locks_via_rpc 00:05:45.787 ************************************ 00:05:45.787 11:51:33 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:45.787 11:51:33 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:45.787 11:51:33 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:45.787 11:51:33 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:45.787 11:51:33 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:46.046 ************************************ 00:05:46.046 START TEST non_locking_app_on_locked_coremask 00:05:46.046 ************************************ 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # non_locking_app_on_locked_coremask 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=155557 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 155557 /var/tmp/spdk.sock 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 155557 ']' 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
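The default_locks_via_rpc block that just finished exercises the same per-core lock but toggles it at runtime: framework_disable_cpumask_locks should remove the /var/tmp/spdk_cpu_lock_* files, and framework_enable_cpumask_locks should bring the lslocks entry back. Roughly, with rpc_cmd from the trace expanded to scripts/rpc.py and $tgt_pid standing for the target launched with -m 0x1 (illustrative, not the exact helpers):

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # Drop the per-core locks on the running target; no lock files should remain.
    "$rpc" framework_disable_cpumask_locks
    ls /var/tmp/spdk_cpu_lock_* 2>/dev/null && echo "unexpected: lock files still present"
    # Re-enable them and verify the lock shows up again for the target's PID.
    "$rpc" framework_enable_cpumask_locks
    lslocks -p "$tgt_pid" | grep -q spdk_cpu_lock && echo "core lock re-acquired"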
00:05:46.046 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.046 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:46.046 [2024-07-25 11:51:33.103430] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:46.046 [2024-07-25 11:51:33.103470] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155557 ] 00:05:46.046 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.046 [2024-07-25 11:51:33.155445] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.046 [2024-07-25 11:51:33.234715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=155782 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 155782 /var/tmp/spdk2.sock 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 155782 ']' 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:46.979 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:46.979 11:51:33 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:46.979 [2024-07-25 11:51:33.932912] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:46.979 [2024-07-25 11:51:33.932959] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid155782 ] 00:05:46.979 EAL: No free 2048 kB hugepages reported on node 1 00:05:46.979 [2024-07-25 11:51:34.002343] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:46.980 [2024-07-25 11:51:34.002364] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.980 [2024-07-25 11:51:34.146487] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:47.547 11:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:47.547 11:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:47.547 11:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 155557 00:05:47.547 11:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 155557 00:05:47.547 11:51:34 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:47.805 lslocks: write error 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 155557 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 155557 ']' 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 155557 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155557 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155557' 00:05:47.805 killing process with pid 155557 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 155557 00:05:47.805 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 155557 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 155782 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 155782 ']' 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 155782 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 155782 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 155782' 00:05:48.741 killing 
process with pid 155782 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 155782 00:05:48.741 11:51:35 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 155782 00:05:49.000 00:05:49.000 real 0m2.953s 00:05:49.000 user 0m3.154s 00:05:49.000 sys 0m0.813s 00:05:49.000 11:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:49.000 11:51:36 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.000 ************************************ 00:05:49.000 END TEST non_locking_app_on_locked_coremask 00:05:49.000 ************************************ 00:05:49.000 11:51:36 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:49.000 11:51:36 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:49.000 11:51:36 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:49.000 11:51:36 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:49.000 11:51:36 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:49.000 ************************************ 00:05:49.000 START TEST locking_app_on_unlocked_coremask 00:05:49.000 ************************************ 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_unlocked_coremask 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=156059 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 156059 /var/tmp/spdk.sock 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 156059 ']' 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:49.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.000 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.000 [2024-07-25 11:51:36.127135] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:49.000 [2024-07-25 11:51:36.127185] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156059 ] 00:05:49.000 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.000 [2024-07-25 11:51:36.180275] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
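What non_locking_app_on_locked_coremask (just finished above) demonstrates: a second app may share an already-locked core as long as it opts out of the cpumask locks. A minimal sketch using the same flags as the trace, with $spdk_tgt as in the earlier sketch:

    "$spdk_tgt" -m 0x1 &                                                  # holds spdk_cpu_lock_000
    "$spdk_tgt" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock &   # "CPU core locks deactivated.", takes no lock
    # Only the first PID shows a spdk_cpu_lock entry in lslocks; both apps run side by side.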
00:05:49.000 [2024-07-25 11:51:36.180299] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.259 [2024-07-25 11:51:36.250596] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=156287 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 156287 /var/tmp/spdk2.sock 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@829 -- # '[' -z 156287 ']' 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:49.825 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:49.825 11:51:36 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:49.825 [2024-07-25 11:51:36.959941] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:49.825 [2024-07-25 11:51:36.959991] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156287 ] 00:05:49.826 EAL: No free 2048 kB hugepages reported on node 1 00:05:49.826 [2024-07-25 11:51:37.035705] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:50.085 [2024-07-25 11:51:37.194799] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:50.651 11:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:50.651 11:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:50.651 11:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 156287 00:05:50.651 11:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 156287 00:05:50.651 11:51:37 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:51.218 lslocks: write error 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 156059 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 156059 ']' 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 156059 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156059 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156059' 00:05:51.218 killing process with pid 156059 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 156059 00:05:51.218 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 156059 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 156287 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@948 -- # '[' -z 156287 ']' 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@952 -- # kill -0 156287 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156287 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # 
process_name=reactor_0 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156287' 00:05:51.784 killing process with pid 156287 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@967 -- # kill 156287 00:05:51.784 11:51:38 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # wait 156287 00:05:52.041 00:05:52.041 real 0m3.193s 00:05:52.041 user 0m3.427s 00:05:52.041 sys 0m0.884s 00:05:52.041 11:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:52.041 11:51:39 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.041 ************************************ 00:05:52.041 END TEST locking_app_on_unlocked_coremask 00:05:52.041 ************************************ 00:05:52.300 11:51:39 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:52.300 11:51:39 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:52.300 11:51:39 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:52.300 11:51:39 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:52.300 11:51:39 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:52.300 ************************************ 00:05:52.300 START TEST locking_app_on_locked_coremask 00:05:52.300 ************************************ 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1123 -- # locking_app_on_locked_coremask 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=156763 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 156763 /var/tmp/spdk.sock 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 156763 ']' 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.300 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:52.300 11:51:39 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 00:05:52.300 [2024-07-25 11:51:39.377798] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
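locking_app_on_unlocked_coremask above is the mirror image: when the first app skips the lock, a second app on the same mask is free to claim it. Sketch under the same assumptions as before:

    "$spdk_tgt" -m 0x1 --disable-cpumask-locks &       # first app, no lock taken
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock &        # second app acquires spdk_cpu_lock_000
    second_pid=$!
    lslocks -p "$second_pid" | grep -q spdk_cpu_lock   # expected to succeed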
00:05:52.300 [2024-07-25 11:51:39.377838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156763 ] 00:05:52.300 EAL: No free 2048 kB hugepages reported on node 1 00:05:52.300 [2024-07-25 11:51:39.431279] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:52.300 [2024-07-25 11:51:39.510338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=156795 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 156795 /var/tmp/spdk2.sock 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 156795 /var/tmp/spdk2.sock 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # waitforlisten 156795 /var/tmp/spdk2.sock 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@829 -- # '[' -z 156795 ']' 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:53.233 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:53.233 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:53.233 [2024-07-25 11:51:40.223765] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:53.233 [2024-07-25 11:51:40.223813] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid156795 ] 00:05:53.233 EAL: No free 2048 kB hugepages reported on node 1 00:05:53.233 [2024-07-25 11:51:40.300343] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 156763 has claimed it. 00:05:53.233 [2024-07-25 11:51:40.300378] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:53.800 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (156795) - No such process 00:05:53.800 ERROR: process (pid: 156795) is no longer running 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 156763 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 156763 00:05:53.800 11:51:40 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:54.058 lslocks: write error 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 156763 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@948 -- # '[' -z 156763 ']' 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@952 -- # kill -0 156763 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # uname 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 156763 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 156763' 00:05:54.059 killing process with pid 156763 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@967 -- # kill 156763 00:05:54.059 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # wait 156763 00:05:54.330 00:05:54.330 real 0m2.105s 00:05:54.330 user 0m2.346s 00:05:54.330 sys 0m0.530s 00:05:54.331 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- 
common/autotest_common.sh@1124 -- # xtrace_disable 00:05:54.331 11:51:41 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 ************************************ 00:05:54.331 END TEST locking_app_on_locked_coremask 00:05:54.331 ************************************ 00:05:54.331 11:51:41 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:54.331 11:51:41 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:54.331 11:51:41 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:54.331 11:51:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:54.331 11:51:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 ************************************ 00:05:54.331 START TEST locking_overlapped_coremask 00:05:54.331 ************************************ 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=157055 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 157055 /var/tmp/spdk.sock 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 157055 ']' 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:54.331 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:54.331 11:51:41 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 00:05:54.331 [2024-07-25 11:51:41.542792] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
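locking_app_on_locked_coremask above is the strict case: with locks left on, a second instance on the same mask must refuse to start, which is why the trace wraps the second waitforlisten in NOT and treats the non-zero exit as the pass condition. Sketch:

    "$spdk_tgt" -m 0x1 &                          # claims core 0
    "$spdk_tgt" -m 0x1 -r /var/tmp/spdk2.sock     # aborts with "Cannot create lock on core 0,
                                                  #  probably process <pid> has claimed it."
    echo "second instance exited with $?"         # non-zero exit is what the test expects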
00:05:54.331 [2024-07-25 11:51:41.542832] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157055 ] 00:05:54.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:54.602 [2024-07-25 11:51:41.595871] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:54.602 [2024-07-25 11:51:41.676948] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:54.602 [2024-07-25 11:51:41.676966] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:54.602 [2024-07-25 11:51:41.676968] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 0 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=157288 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 157288 /var/tmp/spdk2.sock 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@648 -- # local es=0 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@650 -- # valid_exec_arg waitforlisten 157288 /var/tmp/spdk2.sock 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@636 -- # local arg=waitforlisten 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # type -t waitforlisten 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # waitforlisten 157288 /var/tmp/spdk2.sock 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@829 -- # '[' -z 157288 ']' 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:55.168 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:55.168 11:51:42 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:55.168 [2024-07-25 11:51:42.385431] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:05:55.168 [2024-07-25 11:51:42.385482] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157288 ] 00:05:55.168 EAL: No free 2048 kB hugepages reported on node 1 00:05:55.426 [2024-07-25 11:51:42.462814] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 157055 has claimed it. 00:05:55.426 [2024-07-25 11:51:42.462851] app.c: 902:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:55.992 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 844: kill: (157288) - No such process 00:05:55.992 ERROR: process (pid: 157288) is no longer running 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@862 -- # return 1 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@651 -- # es=1 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 157055 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@948 -- # '[' -z 157055 ']' 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@952 -- # kill -0 157055 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # uname 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157055 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157055' 00:05:55.992 killing process with pid 157055 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@967 
-- # kill 157055 00:05:55.992 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # wait 157055 00:05:56.251 00:05:56.251 real 0m1.887s 00:05:56.251 user 0m5.321s 00:05:56.251 sys 0m0.409s 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:56.251 ************************************ 00:05:56.251 END TEST locking_overlapped_coremask 00:05:56.251 ************************************ 00:05:56.251 11:51:43 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:56.251 11:51:43 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:56.251 11:51:43 event.cpu_locks -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:56.251 11:51:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:56.251 11:51:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:56.251 ************************************ 00:05:56.251 START TEST locking_overlapped_coremask_via_rpc 00:05:56.251 ************************************ 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1123 -- # locking_overlapped_coremask_via_rpc 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=157517 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 157517 /var/tmp/spdk.sock 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 157517 ']' 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:56.251 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:56.251 11:51:43 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:56.251 [2024-07-25 11:51:43.496985] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:56.251 [2024-07-25 11:51:43.497026] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157517 ] 00:05:56.509 EAL: No free 2048 kB hugepages reported on node 1 00:05:56.509 [2024-07-25 11:51:43.549805] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
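locking_overlapped_coremask above varies the same idea with partially overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the shared core 2 is enough to keep the second app out while the first one's three lock files stay in place. Sketch:

    "$spdk_tgt" -m 0x7 &                          # locks spdk_cpu_lock_000..002
    "$spdk_tgt" -m 0x1c -r /var/tmp/spdk2.sock    # aborts: core 2 already claimed
    ls /var/tmp/spdk_cpu_lock_*                   # still exactly _000 _001 _002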
00:05:56.509 [2024-07-25 11:51:43.549827] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:56.509 [2024-07-25 11:51:43.630647] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:05:56.509 [2024-07-25 11:51:43.630745] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.509 [2024-07-25 11:51:43.630747] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.073 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:57.073 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:57.073 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=157564 00:05:57.073 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 157564 /var/tmp/spdk2.sock 00:05:57.073 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 157564 ']' 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:57.074 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:57.074 11:51:44 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:57.331 [2024-07-25 11:51:44.351205] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:57.331 [2024-07-25 11:51:44.351255] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid157564 ] 00:05:57.331 EAL: No free 2048 kB hugepages reported on node 1 00:05:57.331 [2024-07-25 11:51:44.426351] app.c: 906:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
00:05:57.331 [2024-07-25 11:51:44.426380] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:57.589 [2024-07-25 11:51:44.585133] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:05:57.589 [2024-07-25 11:51:44.585248] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:05:57.589 [2024-07-25 11:51:44.585249] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@648 -- # local es=0 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.154 [2024-07-25 11:51:45.188117] app.c: 771:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 157517 has claimed it. 
00:05:58.154 request: 00:05:58.154 { 00:05:58.154 "method": "framework_enable_cpumask_locks", 00:05:58.154 "req_id": 1 00:05:58.154 } 00:05:58.154 Got JSON-RPC error response 00:05:58.154 response: 00:05:58.154 { 00:05:58.154 "code": -32603, 00:05:58.154 "message": "Failed to claim CPU core: 2" 00:05:58.154 } 00:05:58.154 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@651 -- # es=1 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 157517 /var/tmp/spdk.sock 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 157517 ']' 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:58.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 157564 /var/tmp/spdk2.sock 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@829 -- # '[' -z 157564 ']' 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:58.155 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 
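The -32603 "Failed to claim CPU core: 2" response above follows directly from the two core masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so core 2 is the only core both targets want, and the first target already holds its lock file. A quick way to see the overlap (plain bash, not part of the test scripts):

    mask_a=0x7      # first target
    mask_b=0x1c     # second target
    overlap=$(( mask_a & mask_b ))                     # 0x4 -> only bit 2 is set
    printf 'overlapping core mask: 0x%x\n' "$overlap"
    for (( core = 0; core < 8; core++ )); do
        (( (overlap >> core) & 1 )) && echo "core $core is claimed by both masks"
    done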
00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:05:58.155 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@862 -- # return 0 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:58.412 00:05:58.412 real 0m2.116s 00:05:58.412 user 0m0.885s 00:05:58.412 sys 0m0.161s 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:58.412 11:51:45 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:58.412 ************************************ 00:05:58.412 END TEST locking_overlapped_coremask_via_rpc 00:05:58.412 ************************************ 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@1142 -- # return 0 00:05:58.412 11:51:45 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:58.412 11:51:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 157517 ]] 00:05:58.412 11:51:45 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 157517 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 157517 ']' 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 157517 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157517 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157517' 00:05:58.412 killing process with pid 157517 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 157517 00:05:58.412 11:51:45 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 157517 00:05:58.980 11:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 157564 ]] 00:05:58.980 11:51:45 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 157564 00:05:58.980 11:51:45 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 157564 ']' 00:05:58.980 11:51:45 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 157564 00:05:58.980 11:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # uname 
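After the locks are enabled on the first target, check_remaining_locks (traced above) asserts that exactly one lock file per claimed core exists, i.e. /var/tmp/spdk_cpu_lock_000 through _002 for mask 0x7 and nothing else. Its core is just a glob-versus-expected comparison, restated here as a standalone sketch:

    locks=(/var/tmp/spdk_cpu_lock_*)
    locks_expected=(/var/tmp/spdk_cpu_lock_{000..002})
    [[ ${locks[*]} == "${locks_expected[*]}" ]] || echo "unexpected lock files: ${locks[*]}"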
00:05:58.980 11:51:45 event.cpu_locks -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:05:58.980 11:51:45 event.cpu_locks -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 157564 00:05:58.980 11:51:46 event.cpu_locks -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:05:58.980 11:51:46 event.cpu_locks -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:05:58.980 11:51:46 event.cpu_locks -- common/autotest_common.sh@966 -- # echo 'killing process with pid 157564' 00:05:58.980 killing process with pid 157564 00:05:58.980 11:51:46 event.cpu_locks -- common/autotest_common.sh@967 -- # kill 157564 00:05:58.980 11:51:46 event.cpu_locks -- common/autotest_common.sh@972 -- # wait 157564 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 157517 ]] 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 157517 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 157517 ']' 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 157517 00:05:59.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (157517) - No such process 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 157517 is not found' 00:05:59.239 Process with pid 157517 is not found 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 157564 ]] 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 157564 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@948 -- # '[' -z 157564 ']' 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@952 -- # kill -0 157564 00:05:59.239 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (157564) - No such process 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@975 -- # echo 'Process with pid 157564 is not found' 00:05:59.239 Process with pid 157564 is not found 00:05:59.239 11:51:46 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:59.239 00:05:59.239 real 0m16.476s 00:05:59.239 user 0m28.818s 00:05:59.239 sys 0m4.604s 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.239 11:51:46 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.239 END TEST cpu_locks 00:05:59.239 ************************************ 00:05:59.239 11:51:46 event -- common/autotest_common.sh@1142 -- # return 0 00:05:59.239 00:05:59.239 real 0m41.052s 00:05:59.239 user 1m18.682s 00:05:59.239 sys 0m7.812s 00:05:59.239 11:51:46 event -- common/autotest_common.sh@1124 -- # xtrace_disable 00:05:59.239 11:51:46 event -- common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.239 END TEST event 00:05:59.239 ************************************ 00:05:59.239 11:51:46 -- common/autotest_common.sh@1142 -- # return 0 00:05:59.239 11:51:46 -- spdk/autotest.sh@182 -- # run_test thread /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.239 11:51:46 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:05:59.239 11:51:46 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.239 11:51:46 -- 
common/autotest_common.sh@10 -- # set +x 00:05:59.239 ************************************ 00:05:59.240 START TEST thread 00:05:59.240 ************************************ 00:05:59.240 11:51:46 thread -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/thread.sh 00:05:59.499 * Looking for test storage... 00:05:59.499 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread 00:05:59.499 11:51:46 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.499 11:51:46 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:05:59.499 11:51:46 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:05:59.499 11:51:46 thread -- common/autotest_common.sh@10 -- # set +x 00:05:59.499 ************************************ 00:05:59.499 START TEST thread_poller_perf 00:05:59.499 ************************************ 00:05:59.499 11:51:46 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:59.499 [2024-07-25 11:51:46.556190] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:05:59.499 [2024-07-25 11:51:46.556257] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158109 ] 00:05:59.499 EAL: No free 2048 kB hugepages reported on node 1 00:05:59.499 [2024-07-25 11:51:46.613330] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:59.499 [2024-07-25 11:51:46.689207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:05:59.499 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:06:00.878 ====================================== 00:06:00.878 busy:2305451012 (cyc) 00:06:00.878 total_run_count: 409000 00:06:00.878 tsc_hz: 2300000000 (cyc) 00:06:00.878 ====================================== 00:06:00.878 poller_cost: 5636 (cyc), 2450 (nsec) 00:06:00.878 00:06:00.878 real 0m1.228s 00:06:00.878 user 0m1.143s 00:06:00.878 sys 0m0.080s 00:06:00.878 11:51:47 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:00.878 11:51:47 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 ************************************ 00:06:00.878 END TEST thread_poller_perf 00:06:00.878 ************************************ 00:06:00.878 11:51:47 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:00.878 11:51:47 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.878 11:51:47 thread -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:00.878 11:51:47 thread -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:00.878 11:51:47 thread -- common/autotest_common.sh@10 -- # set +x 00:06:00.878 ************************************ 00:06:00.878 START TEST thread_poller_perf 00:06:00.878 ************************************ 00:06:00.878 11:51:47 thread.thread_poller_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:06:00.878 [2024-07-25 11:51:47.850884] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:00.878 [2024-07-25 11:51:47.850954] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158360 ] 00:06:00.878 EAL: No free 2048 kB hugepages reported on node 1 00:06:00.878 [2024-07-25 11:51:47.910549] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:00.878 [2024-07-25 11:51:47.980720] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:00.878 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:06:01.817 ====================================== 00:06:01.817 busy:2301537638 (cyc) 00:06:01.817 total_run_count: 5444000 00:06:01.817 tsc_hz: 2300000000 (cyc) 00:06:01.817 ====================================== 00:06:01.817 poller_cost: 422 (cyc), 183 (nsec) 00:06:01.817 00:06:01.817 real 0m1.221s 00:06:01.817 user 0m1.142s 00:06:01.817 sys 0m0.073s 00:06:01.817 11:51:49 thread.thread_poller_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:01.817 11:51:49 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:06:01.817 ************************************ 00:06:01.817 END TEST thread_poller_perf 00:06:01.817 ************************************ 00:06:02.077 11:51:49 thread -- common/autotest_common.sh@1142 -- # return 0 00:06:02.077 11:51:49 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:06:02.077 00:06:02.077 real 0m2.663s 00:06:02.077 user 0m2.372s 00:06:02.077 sys 0m0.299s 00:06:02.077 11:51:49 thread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:02.077 11:51:49 thread -- common/autotest_common.sh@10 -- # set +x 00:06:02.077 ************************************ 00:06:02.077 END TEST thread 00:06:02.077 ************************************ 00:06:02.077 11:51:49 -- common/autotest_common.sh@1142 -- # return 0 00:06:02.077 11:51:49 -- spdk/autotest.sh@183 -- # run_test accel /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:02.077 11:51:49 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:02.077 11:51:49 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:02.077 11:51:49 -- common/autotest_common.sh@10 -- # set +x 00:06:02.077 ************************************ 00:06:02.077 START TEST accel 00:06:02.077 ************************************ 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel.sh 00:06:02.077 * Looking for test storage... 00:06:02.077 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:02.077 11:51:49 accel -- accel/accel.sh@81 -- # declare -A expected_opcs 00:06:02.077 11:51:49 accel -- accel/accel.sh@82 -- # get_expected_opcs 00:06:02.077 11:51:49 accel -- accel/accel.sh@60 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:02.077 11:51:49 accel -- accel/accel.sh@62 -- # spdk_tgt_pid=158649 00:06:02.077 11:51:49 accel -- accel/accel.sh@63 -- # waitforlisten 158649 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@829 -- # '[' -z 158649 ']' 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:02.077 11:51:49 accel -- accel/accel.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -c /dev/fd/63 00:06:02.077 11:51:49 accel -- accel/accel.sh@61 -- # build_accel_config 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:02.077 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
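For reference, the two poller_perf summaries above are self-consistent: poller_cost is the busy cycle count divided by total_run_count, and the nanosecond figure is that quotient scaled by the reported tsc_hz. Reproducing the first run's figures in shell arithmetic (the second run gives 422 cyc / 183 nsec the same way):

    busy=2305451012 runs=409000 tsc_hz=2300000000    # figures from the -l 1 run above
    cost_cyc=$(( busy / runs ))                      # 5636 cycles per poller execution
    cost_nsec=$(( cost_cyc * 1000000000 / tsc_hz ))  # 2450 ns at the 2.3 GHz TSC
    echo "poller_cost: ${cost_cyc} (cyc), ${cost_nsec} (nsec)"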
00:06:02.077 11:51:49 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:02.077 11:51:49 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:02.077 11:51:49 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.077 11:51:49 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:02.077 11:51:49 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:02.077 11:51:49 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:02.077 11:51:49 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:02.077 11:51:49 accel -- accel/accel.sh@41 -- # jq -r . 00:06:02.077 [2024-07-25 11:51:49.289707] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:02.077 [2024-07-25 11:51:49.289752] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158649 ] 00:06:02.077 EAL: No free 2048 kB hugepages reported on node 1 00:06:02.336 [2024-07-25 11:51:49.344647] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:02.336 [2024-07-25 11:51:49.421869] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@862 -- # return 0 00:06:02.905 11:51:50 accel -- accel/accel.sh@65 -- # [[ 0 -gt 0 ]] 00:06:02.905 11:51:50 accel -- accel/accel.sh@66 -- # [[ 0 -gt 0 ]] 00:06:02.905 11:51:50 accel -- accel/accel.sh@67 -- # [[ 0 -gt 0 ]] 00:06:02.905 11:51:50 accel -- accel/accel.sh@68 -- # [[ -n '' ]] 00:06:02.905 11:51:50 accel -- accel/accel.sh@70 -- # exp_opcs=($($rpc_py accel_get_opc_assignments | jq -r ". | to_entries | map(\"\(.key)=\(.value)\") | .[]")) 00:06:02.905 11:51:50 accel -- accel/accel.sh@70 -- # rpc_cmd accel_get_opc_assignments 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:02.905 11:51:50 accel -- accel/accel.sh@70 -- # jq -r '. 
| to_entries | map("\(.key)=\(.value)") | .[]' 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in 
"${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@71 -- # for opc_opt in "${exp_opcs[@]}" 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # IFS== 00:06:02.905 11:51:50 accel -- accel/accel.sh@72 -- # read -r opc module 00:06:02.905 11:51:50 accel -- accel/accel.sh@73 -- # expected_opcs["$opc"]=software 00:06:02.905 11:51:50 accel -- accel/accel.sh@75 -- # killprocess 158649 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@948 -- # '[' -z 158649 ']' 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@952 -- # kill -0 158649 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@953 -- # uname 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:02.905 11:51:50 accel -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 158649 00:06:03.165 11:51:50 accel -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:03.165 11:51:50 accel -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:03.165 11:51:50 accel -- common/autotest_common.sh@966 -- # echo 'killing process with pid 158649' 00:06:03.165 killing process with pid 158649 00:06:03.165 11:51:50 accel -- common/autotest_common.sh@967 -- # kill 158649 00:06:03.165 11:51:50 accel -- common/autotest_common.sh@972 -- # wait 158649 00:06:03.424 11:51:50 accel -- accel/accel.sh@76 -- # trap - ERR 00:06:03.424 11:51:50 accel -- accel/accel.sh@89 -- # run_test accel_help accel_perf -h 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 11:51:50 accel.accel_help -- common/autotest_common.sh@1123 -- # accel_perf -h 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -h 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@12 -- # build_accel_config 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@40 -- # local IFS=, 00:06:03.424 11:51:50 accel.accel_help -- accel/accel.sh@41 -- # jq -r . 
00:06:03.424 11:51:50 accel.accel_help -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.424 11:51:50 accel.accel_help -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.424 11:51:50 accel -- accel/accel.sh@91 -- # run_test accel_missing_filename NOT accel_perf -t 1 -w compress 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.424 11:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.424 ************************************ 00:06:03.424 START TEST accel_missing_filename 00:06:03.424 ************************************ 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@648 -- # local es=0 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.424 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@12 -- # build_accel_config 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@40 -- # local IFS=, 00:06:03.424 11:51:50 accel.accel_missing_filename -- accel/accel.sh@41 -- # jq -r . 00:06:03.424 [2024-07-25 11:51:50.655429] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:03.424 [2024-07-25 11:51:50.655495] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158921 ] 00:06:03.684 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.684 [2024-07-25 11:51:50.711613] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.684 [2024-07-25 11:51:50.786197] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:03.684 [2024-07-25 11:51:50.827571] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:03.684 [2024-07-25 11:51:50.887061] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:03.944 A filename is required. 
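accel_missing_filename runs the compress workload with no input file on purpose and expects exactly the "A filename is required." error above: compress and decompress read their data from the file named with -l rather than generating a buffer. A minimal corrected invocation, using the same binary and the bib test file that the next test passes:

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf \
        -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib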
00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@651 -- # es=234 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@660 -- # es=106 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@661 -- # case "$es" in 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@668 -- # es=1 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:03.944 00:06:03.944 real 0m0.333s 00:06:03.944 user 0m0.252s 00:06:03.944 sys 0m0.122s 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:03.944 11:51:50 accel.accel_missing_filename -- common/autotest_common.sh@10 -- # set +x 00:06:03.944 ************************************ 00:06:03.944 END TEST accel_missing_filename 00:06:03.944 ************************************ 00:06:03.944 11:51:50 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:03.944 11:51:50 accel -- accel/accel.sh@93 -- # run_test accel_compress_verify NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.944 11:51:50 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:03.944 11:51:50 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:03.944 11:51:50 accel -- common/autotest_common.sh@10 -- # set +x 00:06:03.944 ************************************ 00:06:03.944 START TEST accel_compress_verify 00:06:03.944 ************************************ 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@648 -- # local es=0 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:03.944 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:03.944 11:51:51 
accel.accel_compress_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:03.944 11:51:51 accel.accel_compress_verify -- accel/accel.sh@41 -- # jq -r . 00:06:03.944 [2024-07-25 11:51:51.035087] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:03.944 [2024-07-25 11:51:51.035142] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid158945 ] 00:06:03.944 EAL: No free 2048 kB hugepages reported on node 1 00:06:03.944 [2024-07-25 11:51:51.088522] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:03.944 [2024-07-25 11:51:51.161366] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.204 [2024-07-25 11:51:51.202332] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:04.204 [2024-07-25 11:51:51.261850] accel_perf.c:1463:main: *ERROR*: ERROR starting application 00:06:04.204 00:06:04.204 Compression does not support the verify option, aborting. 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@651 -- # es=161 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@660 -- # es=33 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@661 -- # case "$es" in 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@668 -- # es=1 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.204 00:06:04.204 real 0m0.327s 00:06:04.204 user 0m0.248s 00:06:04.204 sys 0m0.115s 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.204 11:51:51 accel.accel_compress_verify -- common/autotest_common.sh@10 -- # set +x 00:06:04.204 ************************************ 00:06:04.204 END TEST accel_compress_verify 00:06:04.204 ************************************ 00:06:04.204 11:51:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.204 11:51:51 accel -- accel/accel.sh@95 -- # run_test accel_wrong_workload NOT accel_perf -t 1 -w foobar 00:06:04.204 11:51:51 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:04.204 11:51:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.204 11:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.204 ************************************ 00:06:04.204 START TEST accel_wrong_workload 00:06:04.204 ************************************ 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w foobar 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@648 -- # local es=0 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w foobar 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.204 11:51:51 accel.accel_wrong_workload -- 
common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w foobar 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w foobar 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@12 -- # build_accel_config 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@40 -- # local IFS=, 00:06:04.204 11:51:51 accel.accel_wrong_workload -- accel/accel.sh@41 -- # jq -r . 00:06:04.204 Unsupported workload type: foobar 00:06:04.204 [2024-07-25 11:51:51.406388] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'w' failed: 1 00:06:04.204 accel_perf options: 00:06:04.204 [-h help message] 00:06:04.204 [-q queue depth per core] 00:06:04.204 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.204 [-T number of threads per core 00:06:04.204 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.204 [-t time in seconds] 00:06:04.204 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.204 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:04.204 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.204 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.204 [-S for crc32c workload, use this seed value (default 0) 00:06:04.204 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.204 [-f for fill workload, use this BYTE value (default 255) 00:06:04.204 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.204 [-y verify result if this switch is on] 00:06:04.204 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.204 Can be used to spread operations across a wider range of memory. 
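accel_wrong_workload feeds accel_perf '-w foobar' and treats the usage dump above as the expected outcome; only the workload names in that listing are accepted. For contrast, a valid combination drawn from the same listing (the flag set the accel_crc32c test further down exercises, there via its accel_test wrapper and JSON config):

    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -t 1 -w crc32c -S 32 -y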
00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@651 -- # es=1 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.204 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.205 00:06:04.205 real 0m0.027s 00:06:04.205 user 0m0.015s 00:06:04.205 sys 0m0.012s 00:06:04.205 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.205 11:51:51 accel.accel_wrong_workload -- common/autotest_common.sh@10 -- # set +x 00:06:04.205 ************************************ 00:06:04.205 END TEST accel_wrong_workload 00:06:04.205 ************************************ 00:06:04.205 Error: writing output failed: Broken pipe 00:06:04.205 11:51:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.205 11:51:51 accel -- accel/accel.sh@97 -- # run_test accel_negative_buffers NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.205 11:51:51 accel -- common/autotest_common.sh@1099 -- # '[' 10 -le 1 ']' 00:06:04.205 11:51:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.205 11:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.464 ************************************ 00:06:04.464 START TEST accel_negative_buffers 00:06:04.464 ************************************ 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@1123 -- # NOT accel_perf -t 1 -w xor -y -x -1 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@648 -- # local es=0 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@650 -- # valid_exec_arg accel_perf -t 1 -w xor -y -x -1 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@636 -- # local arg=accel_perf 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # type -t accel_perf 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:04.464 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # accel_perf -t 1 -w xor -y -x -1 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x -1 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@12 -- # build_accel_config 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@40 -- # local IFS=, 00:06:04.464 11:51:51 accel.accel_negative_buffers -- accel/accel.sh@41 -- # jq -r . 00:06:04.465 -x option must be non-negative. 
00:06:04.465 [2024-07-25 11:51:51.500883] app.c:1451:spdk_app_parse_args: *ERROR*: Parsing app-specific command line parameter 'x' failed: 1 00:06:04.465 accel_perf options: 00:06:04.465 [-h help message] 00:06:04.465 [-q queue depth per core] 00:06:04.465 [-C for supported workloads, use this value to configure the io vector size to test (default 1) 00:06:04.465 [-T number of threads per core 00:06:04.465 [-o transfer size in bytes (default: 4KiB. For compress/decompress, 0 means the input file size)] 00:06:04.465 [-t time in seconds] 00:06:04.465 [-w workload type must be one of these: copy, fill, crc32c, copy_crc32c, compare, compress, decompress, dualcast, xor, 00:06:04.465 [ dif_verify, dif_verify_copy, dif_generate, dif_generate_copy 00:06:04.465 [-M assign module to the operation, not compatible with accel_assign_opc RPC 00:06:04.465 [-l for compress/decompress workloads, name of uncompressed input file 00:06:04.465 [-S for crc32c workload, use this seed value (default 0) 00:06:04.465 [-P for compare workload, percentage of operations that should miscompare (percent, default 0) 00:06:04.465 [-f for fill workload, use this BYTE value (default 255) 00:06:04.465 [-x for xor workload, use this number of source buffers (default, minimum: 2)] 00:06:04.465 [-y verify result if this switch is on] 00:06:04.465 [-a tasks to allocate per core (default: same value as -q)] 00:06:04.465 Can be used to spread operations across a wider range of memory. 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@651 -- # es=1 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:04.465 00:06:04.465 real 0m0.034s 00:06:04.465 user 0m0.020s 00:06:04.465 sys 0m0.014s 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:04.465 11:51:51 accel.accel_negative_buffers -- common/autotest_common.sh@10 -- # set +x 00:06:04.465 ************************************ 00:06:04.465 END TEST accel_negative_buffers 00:06:04.465 ************************************ 00:06:04.465 Error: writing output failed: Broken pipe 00:06:04.465 11:51:51 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:04.465 11:51:51 accel -- accel/accel.sh@101 -- # run_test accel_crc32c accel_test -t 1 -w crc32c -S 32 -y 00:06:04.465 11:51:51 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:04.465 11:51:51 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:04.465 11:51:51 accel -- common/autotest_common.sh@10 -- # set +x 00:06:04.465 ************************************ 00:06:04.465 START TEST accel_crc32c 00:06:04.465 ************************************ 00:06:04.465 11:51:51 accel.accel_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -S 32 -y 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -S 32 -y 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@12 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -S 32 -y 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:04.465 11:51:51 accel.accel_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:04.465 [2024-07-25 11:51:51.584704] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:04.465 [2024-07-25 11:51:51.584760] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159095 ] 00:06:04.465 EAL: No free 2048 kB hugepages reported on node 1 00:06:04.465 [2024-07-25 11:51:51.640754] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:04.725 [2024-07-25 11:51:51.715233] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=crc32c 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case 
"$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=software 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=32 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=1 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:04.725 11:51:51 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 
00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@20 -- # val= 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:05.664 11:51:52 accel.accel_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:05.664 00:06:05.664 real 0m1.331s 00:06:05.664 user 0m1.231s 00:06:05.664 sys 0m0.115s 00:06:05.664 11:51:52 accel.accel_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:05.664 11:51:52 accel.accel_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:05.664 ************************************ 00:06:05.664 END TEST accel_crc32c 00:06:05.664 ************************************ 00:06:05.924 11:51:52 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:05.924 11:51:52 accel -- accel/accel.sh@102 -- # run_test accel_crc32c_C2 accel_test -t 1 -w crc32c -y -C 2 00:06:05.924 11:51:52 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:05.924 11:51:52 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:05.924 11:51:52 accel -- common/autotest_common.sh@10 -- # set +x 00:06:05.924 ************************************ 00:06:05.924 START TEST accel_crc32c_C2 00:06:05.924 ************************************ 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w crc32c -y -C 2 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w crc32c -y -C 2 00:06:05.924 11:51:52 accel.accel_crc32c_C2 
-- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w crc32c -y -C 2 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:05.924 11:51:52 accel.accel_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:05.924 [2024-07-25 11:51:52.983552] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:05.924 [2024-07-25 11:51:52.983620] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159354 ] 00:06:05.924 EAL: No free 2048 kB hugepages reported on node 1 00:06:05.924 [2024-07-25 11:51:53.037983] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:05.924 [2024-07-25 11:51:53.111002] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=crc32c 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=crc32c 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 
accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 
-- # IFS=: 00:06:05.924 11:51:53 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n crc32c ]] 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:07.307 00:06:07.307 real 0m1.335s 00:06:07.307 user 0m1.239s 00:06:07.307 sys 0m0.111s 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:07.307 11:51:54 accel.accel_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:07.307 ************************************ 00:06:07.307 END TEST accel_crc32c_C2 00:06:07.307 ************************************ 00:06:07.307 11:51:54 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:07.307 11:51:54 accel -- accel/accel.sh@103 -- # run_test accel_copy accel_test -t 1 -w copy -y 00:06:07.307 11:51:54 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:07.307 11:51:54 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:07.307 11:51:54 accel -- common/autotest_common.sh@10 -- # set +x 00:06:07.307 ************************************ 00:06:07.307 START TEST accel_copy 00:06:07.307 ************************************ 00:06:07.307 11:51:54 accel.accel_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy -y 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@17 -- # local accel_module 
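The START TEST / END TEST banners and the real/user/sys timings above come from the run_test wrapper in autotest_common.sh; a rough sketch of the shape of that wrapper (an illustration only, not the actual implementation) is:

  # Illustrative only: banners plus a bash `time` around the test command,
  # mirroring what run_test prints in this log.
  run_test_sketch() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      local rc=$?
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
      return $rc
  }
  # e.g.: run_test_sketch accel_copy accel_test -t 1 -w copy -y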
00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy -y 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy -y 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@41 -- # jq -r . 00:06:07.307 [2024-07-25 11:51:54.379541] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:07.307 [2024-07-25 11:51:54.379587] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159632 ] 00:06:07.307 EAL: No free 2048 kB hugepages reported on node 1 00:06:07.307 [2024-07-25 11:51:54.432995] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:07.307 [2024-07-25 11:51:54.505099] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=0x1 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=copy 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@23 -- # accel_opc=copy 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # 
IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=software 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.307 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=32 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=1 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val=Yes 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:07.566 11:51:54 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.503 11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.503 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 
11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@20 -- # val= 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ -n copy ]] 00:06:08.504 11:51:55 accel.accel_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:08.504 00:06:08.504 real 0m1.330s 00:06:08.504 user 0m1.230s 00:06:08.504 sys 0m0.114s 00:06:08.504 11:51:55 accel.accel_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:08.504 11:51:55 accel.accel_copy -- common/autotest_common.sh@10 -- # set +x 00:06:08.504 ************************************ 00:06:08.504 END TEST accel_copy 00:06:08.504 ************************************ 00:06:08.504 11:51:55 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:08.504 11:51:55 accel -- accel/accel.sh@104 -- # run_test accel_fill accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.504 11:51:55 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:08.504 11:51:55 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:08.504 11:51:55 accel -- common/autotest_common.sh@10 -- # set +x 00:06:08.504 ************************************ 00:06:08.504 START TEST accel_fill 00:06:08.504 ************************************ 00:06:08.504 11:51:55 accel.accel_fill -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@16 -- # local accel_opc 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@17 -- # local accel_module 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@15 -- # accel_perf -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w fill -f 128 -q 64 -a 64 -y 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@12 -- # 
build_accel_config 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:08.504 11:51:55 accel.accel_fill -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@40 -- # local IFS=, 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@41 -- # jq -r . 00:06:08.763 [2024-07-25 11:51:55.775550] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:08.763 [2024-07-25 11:51:55.775616] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid159896 ] 00:06:08.763 EAL: No free 2048 kB hugepages reported on node 1 00:06:08.763 [2024-07-25 11:51:55.832137] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:08.763 [2024-07-25 11:51:55.906230] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.763 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x1 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=fill 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@23 -- # accel_opc=fill 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=0x80 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val='4096 bytes' 
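The fill pass is driven by the command line shown just above (-t 1 -w fill -f 128 -q 64 -a 64 -y); the 0x80 in the variable dump matches the 128 handed to -f. Repeating it by hand, under the same assumptions as the crc32c sketch earlier:

  # Hypothetical standalone re-run of the accel_fill case.
  # -f 128 sets the fill pattern byte (shown as 0x80 in the trace); -q and -a
  # are passed through to accel_perf unchanged -- see its usage output for
  # their exact meaning on this SPDK version.
  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$accel_perf" -t 1 -w fill -f 128 -q 64 -a 64 -y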
00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=software 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@22 -- # accel_module=software 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=64 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=1 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val='1 seconds' 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val=Yes 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:08.764 11:51:55 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill 
-- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@20 -- # val= 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@21 -- # case "$var" in 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # IFS=: 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@19 -- # read -r var val 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ -n fill ]] 00:06:10.205 11:51:57 accel.accel_fill -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:10.205 00:06:10.205 real 0m1.339s 00:06:10.205 user 0m1.243s 00:06:10.205 sys 0m0.110s 00:06:10.205 11:51:57 accel.accel_fill -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:10.205 11:51:57 accel.accel_fill -- common/autotest_common.sh@10 -- # set +x 00:06:10.205 ************************************ 00:06:10.205 END TEST accel_fill 00:06:10.205 ************************************ 00:06:10.205 11:51:57 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:10.205 11:51:57 accel -- accel/accel.sh@105 -- # run_test accel_copy_crc32c accel_test -t 1 -w copy_crc32c -y 00:06:10.205 11:51:57 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:10.205 11:51:57 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:10.205 11:51:57 accel -- common/autotest_common.sh@10 -- # set +x 00:06:10.205 ************************************ 00:06:10.205 START TEST accel_copy_crc32c 00:06:10.206 ************************************ 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@16 -- # local accel_opc 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@17 -- # local accel_module 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@12 -- # build_accel_config 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- 
accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@40 -- # local IFS=, 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@41 -- # jq -r . 00:06:10.206 [2024-07-25 11:51:57.175963] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:10.206 [2024-07-25 11:51:57.176030] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160161 ] 00:06:10.206 EAL: No free 2048 kB hugepages reported on node 1 00:06:10.206 [2024-07-25 11:51:57.231370] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:10.206 [2024-07-25 11:51:57.304053] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0x1 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=0 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # 
read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=software 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@22 -- # accel_module=software 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=32 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=1 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val='1 seconds' 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val=Yes 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:10.206 
11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:10.206 11:51:57 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@20 -- # val= 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@21 -- # case "$var" in 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:11.587 00:06:11.587 real 0m1.337s 00:06:11.587 user 0m1.231s 00:06:11.587 sys 0m0.120s 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:11.587 11:51:58 accel.accel_copy_crc32c -- common/autotest_common.sh@10 -- # set +x 00:06:11.587 ************************************ 00:06:11.587 END TEST accel_copy_crc32c 00:06:11.587 ************************************ 00:06:11.587 11:51:58 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:11.587 11:51:58 accel -- accel/accel.sh@106 -- # run_test accel_copy_crc32c_C2 accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.587 11:51:58 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:11.587 11:51:58 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:11.587 11:51:58 accel -- common/autotest_common.sh@10 -- # set +x 00:06:11.587 ************************************ 00:06:11.587 START TEST accel_copy_crc32c_C2 00:06:11.587 ************************************ 00:06:11.587 11:51:58 
accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w copy_crc32c -y -C 2 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@16 -- # local accel_opc 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@17 -- # local accel_module 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@15 -- # accel_perf -t 1 -w copy_crc32c -y -C 2 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w copy_crc32c -y -C 2 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@12 -- # build_accel_config 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:11.587 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@40 -- # local IFS=, 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@41 -- # jq -r . 00:06:11.588 [2024-07-25 11:51:58.575363] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:11.588 [2024-07-25 11:51:58.575427] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160425 ] 00:06:11.588 EAL: No free 2048 kB hugepages reported on node 1 00:06:11.588 [2024-07-25 11:51:58.631195] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:11.588 [2024-07-25 11:51:58.705285] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0x1 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
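This copy_crc32c variant is started with -C 2 (command line just above). Compared with the plain copy_crc32c run, one of the two buffer sizes in the variable dump grows from 4096 to 8192 bytes, which is the visible effect of the -C setting. A hand re-run under the same assumptions:

  # Hypothetical standalone re-run of the accel_copy_crc32c_C2 case.
  # -C 2 is forwarded to accel_perf as-is (its usage text describes it as the
  # vector count used for the crc32c-style workloads).
  accel_perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf
  "$accel_perf" -t 1 -w copy_crc32c -y -C 2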
00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=copy_crc32c 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@23 -- # accel_opc=copy_crc32c 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=0 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='8192 bytes' 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=software 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@22 -- # accel_module=software 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=32 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=1 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- 
accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val='1 seconds' 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val=Yes 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:11.588 11:51:58 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@20 -- # val= 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@21 -- # case "$var" in 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@19 -- # read -r var val 
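Each of these accel tests finishes with the same three checks (they appear again just below for this case): the parsing loop leaves the reported engine in accel_module and the opcode in accel_opc, and the test requires that both were seen and that the software engine handled the work, apparently because no hardware accel engine is configured in this run. In isolation, the checks amount to:

  # Shape of the end-of-test assertions seen at accel.sh@27 in this log.
  # accel_module / accel_opc are the variables the trace shows being filled
  # in at accel.sh@22 and @23; the exact parsing lives in accel.sh.
  [[ -n "$accel_module" ]]           # an engine was reported at all
  [[ -n "$accel_opc" ]]              # the expected opcode was exercised
  [[ "$accel_module" == software ]]  # and it ran on the software engine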
00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ -n copy_crc32c ]] 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:12.970 00:06:12.970 real 0m1.338s 00:06:12.970 user 0m1.246s 00:06:12.970 sys 0m0.107s 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:12.970 11:51:59 accel.accel_copy_crc32c_C2 -- common/autotest_common.sh@10 -- # set +x 00:06:12.970 ************************************ 00:06:12.970 END TEST accel_copy_crc32c_C2 00:06:12.970 ************************************ 00:06:12.970 11:51:59 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:12.970 11:51:59 accel -- accel/accel.sh@107 -- # run_test accel_dualcast accel_test -t 1 -w dualcast -y 00:06:12.970 11:51:59 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:12.970 11:51:59 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:12.970 11:51:59 accel -- common/autotest_common.sh@10 -- # set +x 00:06:12.970 ************************************ 00:06:12.970 START TEST accel_dualcast 00:06:12.970 ************************************ 00:06:12.970 11:51:59 accel.accel_dualcast -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dualcast -y 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@16 -- # local accel_opc 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@17 -- # local accel_module 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@15 -- # accel_perf -t 1 -w dualcast -y 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dualcast -y 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@12 -- # build_accel_config 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@40 -- # local IFS=, 00:06:12.970 11:51:59 accel.accel_dualcast -- accel/accel.sh@41 -- # jq -r . 00:06:12.970 [2024-07-25 11:51:59.977228] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:12.970 [2024-07-25 11:51:59.977293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160690 ] 00:06:12.970 EAL: No free 2048 kB hugepages reported on node 1 00:06:12.971 [2024-07-25 11:52:00.035144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:12.971 [2024-07-25 11:52:00.107273] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=0x1 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=dualcast 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@23 -- # accel_opc=dualcast 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=software 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@22 -- # accel_module=software 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # 
IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=32 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=1 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val='1 seconds' 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val=Yes 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:12.971 11:52:00 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 
accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ -n dualcast ]] 00:06:14.351 11:52:01 accel.accel_dualcast -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:14.351 00:06:14.351 real 0m1.339s 00:06:14.351 user 0m1.241s 00:06:14.351 sys 0m0.110s 00:06:14.351 11:52:01 accel.accel_dualcast -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:14.351 11:52:01 accel.accel_dualcast -- common/autotest_common.sh@10 -- # set +x 00:06:14.351 ************************************ 00:06:14.351 END TEST accel_dualcast 00:06:14.351 ************************************ 00:06:14.351 11:52:01 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:14.351 11:52:01 accel -- accel/accel.sh@108 -- # run_test accel_compare accel_test -t 1 -w compare -y 00:06:14.351 11:52:01 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:14.351 11:52:01 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:14.351 11:52:01 accel -- common/autotest_common.sh@10 -- # set +x 00:06:14.351 ************************************ 00:06:14.351 START TEST accel_compare 00:06:14.351 ************************************ 00:06:14.351 11:52:01 accel.accel_compare -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compare -y 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@16 -- # local accel_opc 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@17 -- # local accel_module 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@15 -- # accel_perf -t 1 -w compare -y 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compare -y 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@12 -- # build_accel_config 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@40 -- # local IFS=, 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@41 -- # jq -r . 00:06:14.351 [2024-07-25 11:52:01.373820] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:14.351 [2024-07-25 11:52:01.373871] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid160949 ] 00:06:14.351 EAL: No free 2048 kB hugepages reported on node 1 00:06:14.351 [2024-07-25 11:52:01.427961] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:14.351 [2024-07-25 11:52:01.500263] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=0x1 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=compare 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@23 -- # accel_opc=compare 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=software 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@22 -- # accel_module=software 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 
accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=32 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=1 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val='1 seconds' 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.351 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val=Yes 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:14.352 11:52:01 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.732 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 
11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ -n compare ]] 00:06:15.733 11:52:02 accel.accel_compare -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:15.733 00:06:15.733 real 0m1.333s 00:06:15.733 user 0m1.241s 00:06:15.733 sys 0m0.106s 00:06:15.733 11:52:02 accel.accel_compare -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:15.733 11:52:02 accel.accel_compare -- common/autotest_common.sh@10 -- # set +x 00:06:15.733 ************************************ 00:06:15.733 END TEST accel_compare 00:06:15.733 ************************************ 00:06:15.733 11:52:02 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:15.733 11:52:02 accel -- accel/accel.sh@109 -- # run_test accel_xor accel_test -t 1 -w xor -y 00:06:15.733 11:52:02 accel -- common/autotest_common.sh@1099 -- # '[' 7 -le 1 ']' 00:06:15.733 11:52:02 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:15.733 11:52:02 accel -- common/autotest_common.sh@10 -- # set +x 00:06:15.733 ************************************ 00:06:15.733 START TEST accel_xor 00:06:15.733 ************************************ 00:06:15.733 11:52:02 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:15.733 [2024-07-25 11:52:02.767013] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:15.733 [2024-07-25 11:52:02.767085] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161203 ] 00:06:15.733 EAL: No free 2048 kB hugepages reported on node 1 00:06:15.733 [2024-07-25 11:52:02.822141] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:15.733 [2024-07-25 11:52:02.898638] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=2 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:15.733 11:52:02 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:17.114 11:52:04 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:17.114 00:06:17.114 real 0m1.339s 00:06:17.114 user 0m1.243s 00:06:17.114 sys 0m0.108s 00:06:17.114 11:52:04 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:17.114 11:52:04 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:17.114 ************************************ 00:06:17.114 END TEST accel_xor 00:06:17.114 ************************************ 00:06:17.114 11:52:04 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:17.114 11:52:04 accel -- accel/accel.sh@110 -- # run_test accel_xor accel_test -t 1 -w xor -y -x 3 00:06:17.114 11:52:04 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:17.114 11:52:04 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:17.114 11:52:04 accel -- common/autotest_common.sh@10 -- # set +x 00:06:17.114 ************************************ 00:06:17.114 START TEST accel_xor 00:06:17.114 ************************************ 00:06:17.115 11:52:04 accel.accel_xor -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w xor -y -x 3 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@16 -- # local accel_opc 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@17 -- # local accel_module 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@15 -- # accel_perf -t 1 -w xor -y -x 3 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w xor -y -x 3 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@12 -- # build_accel_config 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@40 -- # local IFS=, 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@41 -- # jq -r . 00:06:17.115 [2024-07-25 11:52:04.158418] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:17.115 [2024-07-25 11:52:04.158465] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161463 ] 00:06:17.115 EAL: No free 2048 kB hugepages reported on node 1 00:06:17.115 [2024-07-25 11:52:04.211262] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:17.115 [2024-07-25 11:52:04.283180] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=0x1 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=xor 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@23 -- # accel_opc=xor 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=3 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=software 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- 
accel/accel.sh@22 -- # accel_module=software 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=32 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=1 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val='1 seconds' 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val=Yes 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:17.115 11:52:04 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.495 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- 
# val= 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ -n xor ]] 00:06:18.496 11:52:05 accel.accel_xor -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:18.496 00:06:18.496 real 0m1.324s 00:06:18.496 user 0m1.238s 00:06:18.496 sys 0m0.101s 00:06:18.496 11:52:05 accel.accel_xor -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:18.496 11:52:05 accel.accel_xor -- common/autotest_common.sh@10 -- # set +x 00:06:18.496 ************************************ 00:06:18.496 END TEST accel_xor 00:06:18.496 ************************************ 00:06:18.496 11:52:05 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:18.496 11:52:05 accel -- accel/accel.sh@111 -- # run_test accel_dif_verify accel_test -t 1 -w dif_verify 00:06:18.496 11:52:05 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:18.496 11:52:05 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:18.496 11:52:05 accel -- common/autotest_common.sh@10 -- # set +x 00:06:18.496 ************************************ 00:06:18.496 START TEST accel_dif_verify 00:06:18.496 ************************************ 00:06:18.496 11:52:05 accel.accel_dif_verify -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_verify 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@16 -- # local accel_opc 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@17 -- # local accel_module 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_verify 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_verify 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@12 -- # build_accel_config 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@40 -- # local IFS=, 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@41 -- # jq -r . 00:06:18.496 [2024-07-25 11:52:05.548129] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:18.496 [2024-07-25 11:52:05.548180] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161713 ] 00:06:18.496 EAL: No free 2048 kB hugepages reported on node 1 00:06:18.496 [2024-07-25 11:52:05.601652] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:18.496 [2024-07-25 11:52:05.674341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=0x1 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=dif_verify 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@23 -- # accel_opc=dif_verify 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='512 bytes' 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # 
IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='8 bytes' 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=software 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@22 -- # accel_module=software 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=32 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=1 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val='1 seconds' 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val=No 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:18.496 11:52:05 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # 
val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@20 -- # val= 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@21 -- # case "$var" in 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # IFS=: 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@19 -- # read -r var val 00:06:19.876 11:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:19.877 11:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ -n dif_verify ]] 00:06:19.877 11:52:06 accel.accel_dif_verify -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:19.877 00:06:19.877 real 0m1.333s 00:06:19.877 user 0m1.240s 00:06:19.877 sys 0m0.108s 00:06:19.877 11:52:06 accel.accel_dif_verify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:19.877 11:52:06 accel.accel_dif_verify -- common/autotest_common.sh@10 -- # set +x 00:06:19.877 ************************************ 00:06:19.877 END TEST accel_dif_verify 00:06:19.877 ************************************ 00:06:19.877 11:52:06 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:19.877 11:52:06 accel -- accel/accel.sh@112 -- # run_test accel_dif_generate accel_test -t 1 -w dif_generate 00:06:19.877 11:52:06 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:19.877 11:52:06 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:19.877 11:52:06 accel -- common/autotest_common.sh@10 -- # set +x 00:06:19.877 ************************************ 00:06:19.877 START TEST accel_dif_generate 00:06:19.877 ************************************ 00:06:19.877 11:52:06 accel.accel_dif_generate -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@16 -- # local accel_opc 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@17 -- # local accel_module 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 
11:52:06 accel.accel_dif_generate -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@12 -- # build_accel_config 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@40 -- # local IFS=, 00:06:19.877 11:52:06 accel.accel_dif_generate -- accel/accel.sh@41 -- # jq -r . 00:06:19.877 [2024-07-25 11:52:06.937658] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:19.877 [2024-07-25 11:52:06.937708] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid161973 ] 00:06:19.877 EAL: No free 2048 kB hugepages reported on node 1 00:06:19.877 [2024-07-25 11:52:06.992168] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:19.877 [2024-07-25 11:52:07.064335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=0x1 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=dif_generate 00:06:19.877 11:52:07 
accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@23 -- # accel_opc=dif_generate 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='512 bytes' 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val='8 bytes' 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=software 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@22 -- # accel_module=software 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=32 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=1 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- 
accel/accel.sh@20 -- # val='1 seconds' 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val=No 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:19.877 11:52:07 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@20 -- # val= 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@21 -- # case "$var" in 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:21.255 11:52:08 accel.accel_dif_generate -- accel/accel.sh@27 -- # [[ -n dif_generate ]] 00:06:21.255 11:52:08 accel.accel_dif_generate -- 
accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:21.255 00:06:21.255 real 0m1.328s 00:06:21.255 user 0m1.229s 00:06:21.255 sys 0m0.116s 00:06:21.255 11:52:08 accel.accel_dif_generate -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:21.255 11:52:08 accel.accel_dif_generate -- common/autotest_common.sh@10 -- # set +x 00:06:21.255 ************************************ 00:06:21.255 END TEST accel_dif_generate 00:06:21.255 ************************************ 00:06:21.255 11:52:08 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:21.255 11:52:08 accel -- accel/accel.sh@113 -- # run_test accel_dif_generate_copy accel_test -t 1 -w dif_generate_copy 00:06:21.255 11:52:08 accel -- common/autotest_common.sh@1099 -- # '[' 6 -le 1 ']' 00:06:21.255 11:52:08 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:21.255 11:52:08 accel -- common/autotest_common.sh@10 -- # set +x 00:06:21.255 ************************************ 00:06:21.255 START TEST accel_dif_generate_copy 00:06:21.255 ************************************ 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w dif_generate_copy 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@16 -- # local accel_opc 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@17 -- # local accel_module 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@15 -- # accel_perf -t 1 -w dif_generate_copy 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w dif_generate_copy 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@12 -- # build_accel_config 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:21.255 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@40 -- # local IFS=, 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@41 -- # jq -r . 00:06:21.256 [2024-07-25 11:52:08.335173] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:21.256 [2024-07-25 11:52:08.335220] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162224 ] 00:06:21.256 EAL: No free 2048 kB hugepages reported on node 1 00:06:21.256 [2024-07-25 11:52:08.389307] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.256 [2024-07-25 11:52:08.461369] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=0x1 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.256 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=dif_generate_copy 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@23 -- # accel_opc=dif_generate_copy 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 
00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=software 00:06:21.515 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@22 -- # accel_module=software 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=32 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=1 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val='1 seconds' 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val=No 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:21.516 11:52:08 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- 
accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@20 -- # val= 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@21 -- # case "$var" in 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # IFS=: 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@19 -- # read -r var val 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ -n dif_generate_copy ]] 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:22.454 00:06:22.454 real 0m1.334s 00:06:22.454 user 0m1.239s 00:06:22.454 sys 0m0.108s 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:22.454 11:52:09 accel.accel_dif_generate_copy -- common/autotest_common.sh@10 -- # set +x 00:06:22.454 ************************************ 00:06:22.454 END TEST accel_dif_generate_copy 00:06:22.454 ************************************ 00:06:22.454 11:52:09 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:22.454 11:52:09 accel -- accel/accel.sh@115 -- # [[ y == y ]] 00:06:22.454 11:52:09 accel -- accel/accel.sh@116 -- # run_test accel_comp accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.454 11:52:09 accel -- common/autotest_common.sh@1099 -- # '[' 8 -le 1 ']' 00:06:22.454 11:52:09 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:22.454 11:52:09 accel -- common/autotest_common.sh@10 -- # set +x 00:06:22.714 ************************************ 00:06:22.714 START TEST accel_comp 00:06:22.714 ************************************ 00:06:22.714 11:52:09 accel.accel_comp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.714 11:52:09 accel.accel_comp -- 
accel/accel.sh@16 -- # local accel_opc 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@17 -- # local accel_module 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@15 -- # accel_perf -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w compress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@12 -- # build_accel_config 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@40 -- # local IFS=, 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@41 -- # jq -r . 00:06:22.714 [2024-07-25 11:52:09.730671] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:22.714 [2024-07-25 11:52:09.730718] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162475 ] 00:06:22.714 EAL: No free 2048 kB hugepages reported on node 1 00:06:22.714 [2024-07-25 11:52:09.784707] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:22.714 [2024-07-25 11:52:09.856773] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=0x1 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- 
accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=compress 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@23 -- # accel_opc=compress 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=software 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@22 -- # accel_module=software 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=32 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.714 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=1 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val=No 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r 
var val 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:22.715 11:52:09 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@20 -- # val= 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ -n compress ]] 00:06:24.092 11:52:11 accel.accel_comp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:24.092 00:06:24.092 real 0m1.336s 00:06:24.092 user 0m1.243s 00:06:24.092 sys 0m0.107s 00:06:24.092 11:52:11 accel.accel_comp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:24.092 11:52:11 accel.accel_comp -- common/autotest_common.sh@10 -- # set +x 00:06:24.092 ************************************ 00:06:24.092 END TEST accel_comp 00:06:24.092 ************************************ 00:06:24.092 11:52:11 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:24.092 11:52:11 accel -- accel/accel.sh@117 -- # run_test accel_decomp accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.092 11:52:11 accel -- common/autotest_common.sh@1099 -- # '[' 9 -le 1 ']' 00:06:24.092 11:52:11 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:24.092 11:52:11 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:24.092 ************************************ 00:06:24.092 START TEST accel_decomp 00:06:24.092 ************************************ 00:06:24.092 11:52:11 accel.accel_decomp -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.092 11:52:11 accel.accel_decomp -- accel/accel.sh@16 -- # local accel_opc 00:06:24.092 11:52:11 accel.accel_decomp -- accel/accel.sh@17 -- # local accel_module 00:06:24.092 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.092 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.092 11:52:11 accel.accel_decomp -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@12 -- # build_accel_config 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@40 -- # local IFS=, 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@41 -- # jq -r . 00:06:24.093 [2024-07-25 11:52:11.126011] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:24.093 [2024-07-25 11:52:11.126064] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162723 ] 00:06:24.093 EAL: No free 2048 kB hugepages reported on node 1 00:06:24.093 [2024-07-25 11:52:11.179481] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.093 [2024-07-25 11:52:11.251555] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=0x1 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=decompress 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # 
val=software 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@22 -- # accel_module=software 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=32 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=1 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val='1 seconds' 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val=Yes 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:24.093 11:52:11 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 
accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:25.471 11:52:12 accel.accel_decomp -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:25.471 00:06:25.471 real 0m1.335s 00:06:25.471 user 0m1.237s 00:06:25.471 sys 0m0.112s 00:06:25.471 11:52:12 accel.accel_decomp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:25.471 11:52:12 accel.accel_decomp -- common/autotest_common.sh@10 -- # set +x 00:06:25.471 ************************************ 00:06:25.471 END TEST accel_decomp 00:06:25.471 ************************************ 00:06:25.471 11:52:12 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:25.471 11:52:12 accel -- accel/accel.sh@118 -- # run_test accel_decomp_full accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.471 11:52:12 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:25.471 11:52:12 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:25.471 11:52:12 accel -- common/autotest_common.sh@10 -- # set +x 00:06:25.471 ************************************ 00:06:25.471 START TEST accel_decomp_full 00:06:25.471 ************************************ 00:06:25.471 11:52:12 accel.accel_decomp_full -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@16 -- # local accel_opc 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@17 -- # local accel_module 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 00:06:25.471 11:52:12 
accel.accel_decomp_full -- accel/accel.sh@12 -- # build_accel_config 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@40 -- # local IFS=, 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@41 -- # jq -r . 00:06:25.471 [2024-07-25 11:52:12.519192] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:25.471 [2024-07-25 11:52:12.519245] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid162969 ] 00:06:25.471 EAL: No free 2048 kB hugepages reported on node 1 00:06:25.471 [2024-07-25 11:52:12.573570] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.471 [2024-07-25 11:52:12.646447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=0x1 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=decompress 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 
accel.accel_decomp_full -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=software 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@22 -- # accel_module=software 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=32 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.471 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=1 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val='1 seconds' 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val=Yes 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # 
case "$var" in 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:25.472 11:52:12 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@20 -- # val= 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@21 -- # case "$var" in 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:26.884 11:52:13 accel.accel_decomp_full -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:26.884 00:06:26.884 real 0m1.346s 00:06:26.884 user 0m1.244s 00:06:26.884 sys 0m0.116s 00:06:26.884 11:52:13 accel.accel_decomp_full -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:26.884 11:52:13 accel.accel_decomp_full -- common/autotest_common.sh@10 -- # set +x 00:06:26.884 ************************************ 00:06:26.884 END TEST accel_decomp_full 00:06:26.884 ************************************ 00:06:26.884 11:52:13 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:26.884 11:52:13 accel -- accel/accel.sh@119 -- # run_test accel_decomp_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.884 11:52:13 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 
00:06:26.884 11:52:13 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:26.884 11:52:13 accel -- common/autotest_common.sh@10 -- # set +x 00:06:26.884 ************************************ 00:06:26.884 START TEST accel_decomp_mcore 00:06:26.884 ************************************ 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:26.884 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -m 0xf 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:26.885 11:52:13 accel.accel_decomp_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:26.885 [2024-07-25 11:52:13.928072] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:26.885 [2024-07-25 11:52:13.928123] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163242 ] 00:06:26.885 EAL: No free 2048 kB hugepages reported on node 1 00:06:26.885 [2024-07-25 11:52:13.984505] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:26.885 [2024-07-25 11:52:14.058994] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:26.885 [2024-07-25 11:52:14.059093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:26.885 [2024-07-25 11:52:14.059117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:26.885 [2024-07-25 11:52:14.059119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.148 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 
accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=software 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=32 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=1 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:27.149 11:52:14 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.083 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@20 -- # val= 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:28.084 00:06:28.084 real 0m1.350s 00:06:28.084 user 0m4.564s 00:06:28.084 sys 0m0.126s 00:06:28.084 11:52:15 
accel.accel_decomp_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:28.084 11:52:15 accel.accel_decomp_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:28.084 ************************************ 00:06:28.084 END TEST accel_decomp_mcore 00:06:28.084 ************************************ 00:06:28.084 11:52:15 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:28.084 11:52:15 accel -- accel/accel.sh@120 -- # run_test accel_decomp_full_mcore accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.084 11:52:15 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:28.084 11:52:15 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:28.084 11:52:15 accel -- common/autotest_common.sh@10 -- # set +x 00:06:28.084 ************************************ 00:06:28.084 START TEST accel_decomp_full_mcore 00:06:28.084 ************************************ 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@16 -- # local accel_opc 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@17 -- # local accel_module 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -m 0xf 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@12 -- # build_accel_config 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@40 -- # local IFS=, 00:06:28.084 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@41 -- # jq -r . 00:06:28.084 [2024-07-25 11:52:15.327874] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
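Annotation: the run_test line above launches accel_perf for the full-buffer, multi-core decompress case. Below is a minimal hand-run sketch of that same invocation, not the harness itself: SPDK_DIR is an assumed local checkout, the generated JSON accel config that accel.sh feeds in on /dev/fd/62 is omitted, and flag meanings beyond what the trace itself suggests (-w decompress workload, -m 0xf matching the four reactors in the EAL output that follows, -t 1 apparently matching the '1 seconds' value, -y apparently matching val=Yes) should be confirmed with accel_perf --help rather than taken from here.

# Sketch only: replay the accel_perf command traced above by hand.
# SPDK_DIR is an assumption; the harness also passes -c /dev/fd/62 with a
# generated JSON accel config, which this sketch leaves out.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/build/examples/accel_perf" \
        -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" \
        -y -o 0 -m 0xf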
00:06:28.084 [2024-07-25 11:52:15.327910] [ DPDK EAL parameters: accel_perf --no-shconf -c 0xf --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163496 ] 00:06:28.345 EAL: No free 2048 kB hugepages reported on node 1 00:06:28.345 [2024-07-25 11:52:15.380789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:06:28.345 [2024-07-25 11:52:15.455651] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:28.345 [2024-07-25 11:52:15.455748] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:28.345 [2024-07-25 11:52:15.455845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:28.345 [2024-07-25 11:52:15.455846] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=0xf 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=decompress 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 
-- # val='111250 bytes' 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=software 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@22 -- # accel_module=software 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=32 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=1 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val='1 seconds' 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val=Yes 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:28.345 11:52:15 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@20 -- # val= 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@21 -- # case "$var" in 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- 
accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:29.723 00:06:29.723 real 0m1.343s 00:06:29.723 user 0m4.584s 00:06:29.723 sys 0m0.123s 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:29.723 11:52:16 accel.accel_decomp_full_mcore -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 ************************************ 00:06:29.723 END TEST accel_decomp_full_mcore 00:06:29.723 ************************************ 00:06:29.723 11:52:16 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:29.723 11:52:16 accel -- accel/accel.sh@121 -- # run_test accel_decomp_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.723 11:52:16 accel -- common/autotest_common.sh@1099 -- # '[' 11 -le 1 ']' 00:06:29.723 11:52:16 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:29.723 11:52:16 accel -- common/autotest_common.sh@10 -- # set +x 00:06:29.723 ************************************ 00:06:29.723 START TEST accel_decomp_mthread 00:06:29.723 ************************************ 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -T 2 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:29.723 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:29.723 [2024-07-25 11:52:16.733605] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
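Annotation: for the *_mthread variants the harness drops the 0xf core mask and passes -T 2 instead; the trace that follows records val=2 alongside the other options and the EAL parameters show a single-core mask (-c 0x1, one reactor on core 0). The non-full mthread run also records '4096 bytes' where the full_* runs record '111250 bytes'. A hand-run sketch of the mthread invocation, under the same assumptions as the previous sketch:

# Sketch only: single-core decompress with -T 2, matching the
# accel_decomp_mthread run_test line above. SPDK_DIR is an assumption and the
# harness-generated -c /dev/fd/62 config is again omitted.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/build/examples/accel_perf" \
        -t 1 -w decompress \
        -l "$SPDK_DIR/test/accel/bib" \
        -y -T 2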
00:06:29.723 [2024-07-25 11:52:16.733642] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid163746 ] 00:06:29.723 EAL: No free 2048 kB hugepages reported on node 1 00:06:29.724 [2024-07-25 11:52:16.781325] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:29.724 [2024-07-25 11:52:16.855239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='4096 bytes' 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 
accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=software 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=32 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=2 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:29.724 11:52:16 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread 
-- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:31.102 00:06:31.102 real 0m1.324s 00:06:31.102 user 0m1.234s 00:06:31.102 sys 0m0.105s 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:31.102 11:52:18 accel.accel_decomp_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:31.102 ************************************ 00:06:31.102 END TEST accel_decomp_mthread 00:06:31.102 ************************************ 00:06:31.102 11:52:18 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:31.102 11:52:18 accel -- accel/accel.sh@122 -- # run_test accel_decomp_full_mthread accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.102 11:52:18 accel -- common/autotest_common.sh@1099 -- # '[' 13 -le 1 ']' 00:06:31.102 11:52:18 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:31.102 11:52:18 accel -- 
common/autotest_common.sh@10 -- # set +x 00:06:31.102 ************************************ 00:06:31.102 START TEST accel_decomp_full_mthread 00:06:31.102 ************************************ 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1123 -- # accel_test -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@16 -- # local accel_opc 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@17 -- # local accel_module 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@15 -- # accel_perf -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/accel_perf -c /dev/fd/62 -t 1 -w decompress -l /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib -y -o 0 -T 2 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@12 -- # build_accel_config 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@40 -- # local IFS=, 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@41 -- # jq -r . 00:06:31.102 [2024-07-25 11:52:18.128246] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:31.102 [2024-07-25 11:52:18.128293] [ DPDK EAL parameters: accel_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164001 ] 00:06:31.102 EAL: No free 2048 kB hugepages reported on node 1 00:06:31.102 [2024-07-25 11:52:18.181839] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:31.102 [2024-07-25 11:52:18.253934] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=0x1 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=decompress 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@23 -- # accel_opc=decompress 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='111250 bytes' 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread 
-- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=software 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@22 -- # accel_module=software 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/bib 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=32 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=2 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val='1 seconds' 00:06:31.102 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val=Yes 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- 
accel/accel.sh@20 -- # val= 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:31.103 11:52:18 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@20 -- # val= 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@21 -- # case "$var" in 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # IFS=: 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@19 -- # read -r var val 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n software ]] 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ -n decompress ]] 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- accel/accel.sh@27 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:06:32.478 00:06:32.478 real 0m1.358s 00:06:32.478 user 0m1.257s 00:06:32.478 sys 0m0.115s 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.478 11:52:19 accel.accel_decomp_full_mthread -- common/autotest_common.sh@10 -- # set +x 00:06:32.478 ************************************ 00:06:32.478 END TEST accel_decomp_full_mthread 
00:06:32.478 ************************************ 00:06:32.478 11:52:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.478 11:52:19 accel -- accel/accel.sh@124 -- # [[ n == y ]] 00:06:32.478 11:52:19 accel -- accel/accel.sh@137 -- # run_test accel_dif_functional_tests /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.478 11:52:19 accel -- accel/accel.sh@137 -- # build_accel_config 00:06:32.478 11:52:19 accel -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:06:32.478 11:52:19 accel -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.478 11:52:19 accel -- accel/accel.sh@31 -- # accel_json_cfg=() 00:06:32.478 11:52:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.478 11:52:19 accel -- accel/accel.sh@32 -- # [[ 0 -gt 0 ]] 00:06:32.478 11:52:19 accel -- accel/accel.sh@33 -- # [[ 0 -gt 0 ]] 00:06:32.478 11:52:19 accel -- accel/accel.sh@34 -- # [[ 0 -gt 0 ]] 00:06:32.478 11:52:19 accel -- accel/accel.sh@36 -- # [[ -n '' ]] 00:06:32.478 11:52:19 accel -- accel/accel.sh@40 -- # local IFS=, 00:06:32.478 11:52:19 accel -- accel/accel.sh@41 -- # jq -r . 00:06:32.478 ************************************ 00:06:32.478 START TEST accel_dif_functional_tests 00:06:32.478 ************************************ 00:06:32.478 11:52:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/dif/dif -c /dev/fd/62 00:06:32.478 [2024-07-25 11:52:19.561252] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:32.478 [2024-07-25 11:52:19.561290] [ DPDK EAL parameters: DIF --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164247 ] 00:06:32.478 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.478 [2024-07-25 11:52:19.612930] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:32.478 [2024-07-25 11:52:19.686986] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:32.478 [2024-07-25 11:52:19.687082] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:32.478 [2024-07-25 11:52:19.687085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:32.738 00:06:32.738 00:06:32.738 CUnit - A unit testing framework for C - Version 2.1-3 00:06:32.738 http://cunit.sourceforge.net/ 00:06:32.738 00:06:32.738 00:06:32.738 Suite: accel_dif 00:06:32.738 Test: verify: DIF generated, GUARD check ...passed 00:06:32.738 Test: verify: DIF generated, APPTAG check ...passed 00:06:32.738 Test: verify: DIF generated, REFTAG check ...passed 00:06:32.738 Test: verify: DIF not generated, GUARD check ...[2024-07-25 11:52:19.755290] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.738 passed 00:06:32.738 Test: verify: DIF not generated, APPTAG check ...[2024-07-25 11:52:19.755337] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.738 passed 00:06:32.738 Test: verify: DIF not generated, REFTAG check ...[2024-07-25 11:52:19.755371] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.738 passed 00:06:32.738 Test: verify: APPTAG correct, APPTAG check ...passed 00:06:32.738 Test: verify: APPTAG incorrect, APPTAG check ...[2024-07-25 11:52:19.755416] dif.c: 
876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=30, Expected=28, Actual=14 00:06:32.738 passed 00:06:32.738 Test: verify: APPTAG incorrect, no APPTAG check ...passed 00:06:32.738 Test: verify: REFTAG incorrect, REFTAG ignore ...passed 00:06:32.738 Test: verify: REFTAG_INIT correct, REFTAG check ...passed 00:06:32.738 Test: verify: REFTAG_INIT incorrect, REFTAG check ...[2024-07-25 11:52:19.755515] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=10 00:06:32.738 passed 00:06:32.738 Test: verify copy: DIF generated, GUARD check ...passed 00:06:32.738 Test: verify copy: DIF generated, APPTAG check ...passed 00:06:32.738 Test: verify copy: DIF generated, REFTAG check ...passed 00:06:32.738 Test: verify copy: DIF not generated, GUARD check ...[2024-07-25 11:52:19.755623] dif.c: 861:_dif_verify: *ERROR*: Failed to compare Guard: LBA=10, Expected=5a5a, Actual=7867 00:06:32.738 passed 00:06:32.738 Test: verify copy: DIF not generated, APPTAG check ...[2024-07-25 11:52:19.755643] dif.c: 876:_dif_verify: *ERROR*: Failed to compare App Tag: LBA=10, Expected=14, Actual=5a5a 00:06:32.738 passed 00:06:32.738 Test: verify copy: DIF not generated, REFTAG check ...[2024-07-25 11:52:19.755669] dif.c: 811:_dif_reftag_check: *ERROR*: Failed to compare Ref Tag: LBA=10, Expected=a, Actual=5a5a5a5a 00:06:32.738 passed 00:06:32.738 Test: generate copy: DIF generated, GUARD check ...passed 00:06:32.738 Test: generate copy: DIF generated, APTTAG check ...passed 00:06:32.738 Test: generate copy: DIF generated, REFTAG check ...passed 00:06:32.738 Test: generate copy: DIF generated, no GUARD check flag set ...passed 00:06:32.738 Test: generate copy: DIF generated, no APPTAG check flag set ...passed 00:06:32.738 Test: generate copy: DIF generated, no REFTAG check flag set ...passed 00:06:32.738 Test: generate copy: iovecs-len validate ...[2024-07-25 11:52:19.755830] dif.c:1225:spdk_dif_generate_copy: *ERROR*: Size of bounce_iovs arrays are not valid or misaligned with block_size. 
00:06:32.738 passed 00:06:32.738 Test: generate copy: buffer alignment validate ...passed 00:06:32.738 00:06:32.738 Run Summary: Type Total Ran Passed Failed Inactive 00:06:32.738 suites 1 1 n/a 0 0 00:06:32.738 tests 26 26 26 0 0 00:06:32.738 asserts 115 115 115 0 n/a 00:06:32.738 00:06:32.738 Elapsed time = 0.000 seconds 00:06:32.738 00:06:32.738 real 0m0.404s 00:06:32.738 user 0m0.619s 00:06:32.738 sys 0m0.141s 00:06:32.738 11:52:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.738 11:52:19 accel.accel_dif_functional_tests -- common/autotest_common.sh@10 -- # set +x 00:06:32.738 ************************************ 00:06:32.738 END TEST accel_dif_functional_tests 00:06:32.738 ************************************ 00:06:32.738 11:52:19 accel -- common/autotest_common.sh@1142 -- # return 0 00:06:32.738 00:06:32.738 real 0m30.804s 00:06:32.738 user 0m34.772s 00:06:32.738 sys 0m4.063s 00:06:32.738 11:52:19 accel -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:32.738 11:52:19 accel -- common/autotest_common.sh@10 -- # set +x 00:06:32.738 ************************************ 00:06:32.738 END TEST accel 00:06:32.738 ************************************ 00:06:32.738 11:52:19 -- common/autotest_common.sh@1142 -- # return 0 00:06:32.738 11:52:19 -- spdk/autotest.sh@184 -- # run_test accel_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.738 11:52:19 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:32.738 11:52:19 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:32.738 11:52:19 -- common/autotest_common.sh@10 -- # set +x 00:06:32.997 ************************************ 00:06:32.997 START TEST accel_rpc 00:06:32.997 ************************************ 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel/accel_rpc.sh 00:06:32.997 * Looking for test storage... 00:06:32.997 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/accel 00:06:32.997 11:52:20 accel_rpc -- accel/accel_rpc.sh@11 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:06:32.997 11:52:20 accel_rpc -- accel/accel_rpc.sh@14 -- # spdk_tgt_pid=164319 00:06:32.997 11:52:20 accel_rpc -- accel/accel_rpc.sh@15 -- # waitforlisten 164319 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@829 -- # '[' -z 164319 ']' 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:32.997 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:32.997 11:52:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:32.997 11:52:20 accel_rpc -- accel/accel_rpc.sh@13 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --wait-for-rpc 00:06:32.997 [2024-07-25 11:52:20.153487] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
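Annotation on the accel_dif output above: the dif.c *ERROR* lines are the expected output of the suite's negative-path checks (the "DIF not generated" and "incorrect" tests deliberately mismatch the Guard, App Tag and Ref Tag), and the Run Summary confirms all 26 tests passed with 0 failures. To re-run just that suite outside autotest, the binary the harness calls can be invoked directly; SPDK_DIR is an assumed path and, as in the sketches above, the JSON accel config piped in on /dev/fd/62 is omitted (pass one with -c if the binary requires it).

# Sketch only: run the DIF functional tests the same way accel.sh does above.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/test/accel/dif/dif"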
00:06:32.997 [2024-07-25 11:52:20.153535] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164319 ] 00:06:32.997 EAL: No free 2048 kB hugepages reported on node 1 00:06:32.997 [2024-07-25 11:52:20.205947] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:33.256 [2024-07-25 11:52:20.286380] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:33.825 11:52:20 accel_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:33.825 11:52:20 accel_rpc -- common/autotest_common.sh@862 -- # return 0 00:06:33.825 11:52:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ y == y ]] 00:06:33.825 11:52:20 accel_rpc -- accel/accel_rpc.sh@45 -- # [[ 0 -gt 0 ]] 00:06:33.825 11:52:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ y == y ]] 00:06:33.825 11:52:20 accel_rpc -- accel/accel_rpc.sh@49 -- # [[ 0 -gt 0 ]] 00:06:33.825 11:52:20 accel_rpc -- accel/accel_rpc.sh@53 -- # run_test accel_assign_opcode accel_assign_opcode_test_suite 00:06:33.825 11:52:20 accel_rpc -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:33.825 11:52:20 accel_rpc -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:33.825 11:52:20 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:33.825 ************************************ 00:06:33.825 START TEST accel_assign_opcode 00:06:33.825 ************************************ 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1123 -- # accel_assign_opcode_test_suite 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@38 -- # rpc_cmd accel_assign_opc -o copy -m incorrect 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.825 [2024-07-25 11:52:20.956371] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module incorrect 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@40 -- # rpc_cmd accel_assign_opc -o copy -m software 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:33.825 [2024-07-25 11:52:20.964384] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation copy will be assigned to module software 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@41 -- # rpc_cmd framework_start_init 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:33.825 11:52:20 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # rpc_cmd accel_get_opc_assignments 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 
00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # jq -r .copy 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- accel/accel_rpc.sh@42 -- # grep software 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:34.085 software 00:06:34.085 00:06:34.085 real 0m0.232s 00:06:34.085 user 0m0.042s 00:06:34.085 sys 0m0.010s 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.085 11:52:21 accel_rpc.accel_assign_opcode -- common/autotest_common.sh@10 -- # set +x 00:06:34.085 ************************************ 00:06:34.085 END TEST accel_assign_opcode 00:06:34.085 ************************************ 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@1142 -- # return 0 00:06:34.085 11:52:21 accel_rpc -- accel/accel_rpc.sh@55 -- # killprocess 164319 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@948 -- # '[' -z 164319 ']' 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@952 -- # kill -0 164319 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@953 -- # uname 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164319 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164319' 00:06:34.085 killing process with pid 164319 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@967 -- # kill 164319 00:06:34.085 11:52:21 accel_rpc -- common/autotest_common.sh@972 -- # wait 164319 00:06:34.344 00:06:34.344 real 0m1.549s 00:06:34.344 user 0m1.610s 00:06:34.344 sys 0m0.405s 00:06:34.344 11:52:21 accel_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:34.344 11:52:21 accel_rpc -- common/autotest_common.sh@10 -- # set +x 00:06:34.344 ************************************ 00:06:34.344 END TEST accel_rpc 00:06:34.344 ************************************ 00:06:34.604 11:52:21 -- common/autotest_common.sh@1142 -- # return 0 00:06:34.604 11:52:21 -- spdk/autotest.sh@185 -- # run_test app_cmdline /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.604 11:52:21 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:34.604 11:52:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:34.604 11:52:21 -- common/autotest_common.sh@10 -- # set +x 00:06:34.604 ************************************ 00:06:34.604 START TEST app_cmdline 00:06:34.604 ************************************ 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/cmdline.sh 00:06:34.604 * Looking for test storage... 
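Annotation on the accel_rpc trace above: the target is started with --wait-for-rpc, the copy opcode is assigned first to a bogus module ("incorrect") and then to "software", framework_start_init completes startup, and accel_get_opc_assignments | jq -r .copy is expected to report software. A hedged, by-hand version of that RPC sequence follows; the paths are assumptions and the crude sleep stands in for the harness's waitforlisten.

# Sketch only: replay the accel_assign_opcode RPC flow traced above by hand.
SPDK_DIR=/path/to/spdk
"$SPDK_DIR/build/bin/spdk_tgt" --wait-for-rpc &
TGT_PID=$!
sleep 1                                   # assumption: enough time for the RPC socket to appear
"$SPDK_DIR/scripts/rpc.py" accel_assign_opc -o copy -m software
"$SPDK_DIR/scripts/rpc.py" framework_start_init
"$SPDK_DIR/scripts/rpc.py" accel_get_opc_assignments | jq -r .copy   # expect: software
kill "$TGT_PID"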
00:06:34.604 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:34.604 11:52:21 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:06:34.604 11:52:21 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=164624 00:06:34.604 11:52:21 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 164624 00:06:34.604 11:52:21 app_cmdline -- app/cmdline.sh@16 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@829 -- # '[' -z 164624 ']' 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:34.604 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:34.604 11:52:21 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:34.604 [2024-07-25 11:52:21.751644] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:34.604 [2024-07-25 11:52:21.751697] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid164624 ] 00:06:34.604 EAL: No free 2048 kB hugepages reported on node 1 00:06:34.604 [2024-07-25 11:52:21.805170] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:34.863 [2024-07-25 11:52:21.887772] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:06:35.432 11:52:22 app_cmdline -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:35.432 11:52:22 app_cmdline -- common/autotest_common.sh@862 -- # return 0 00:06:35.432 11:52:22 app_cmdline -- app/cmdline.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py spdk_get_version 00:06:35.742 { 00:06:35.742 "version": "SPDK v24.09-pre git sha1 58883cba9", 00:06:35.742 "fields": { 00:06:35.742 "major": 24, 00:06:35.742 "minor": 9, 00:06:35.742 "patch": 0, 00:06:35.742 "suffix": "-pre", 00:06:35.742 "commit": "58883cba9" 00:06:35.742 } 00:06:35.742 } 00:06:35.742 11:52:22 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:06:35.742 11:52:22 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:06:35.742 11:52:22 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@26 -- # sort 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods 
spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@30 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@648 -- # local es=0 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:06:35.743 request: 00:06:35.743 { 00:06:35.743 "method": "env_dpdk_get_mem_stats", 00:06:35.743 "req_id": 1 00:06:35.743 } 00:06:35.743 Got JSON-RPC error response 00:06:35.743 response: 00:06:35.743 { 00:06:35.743 "code": -32601, 00:06:35.743 "message": "Method not found" 00:06:35.743 } 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@651 -- # es=1 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:06:35.743 11:52:22 app_cmdline -- app/cmdline.sh@1 -- # killprocess 164624 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@948 -- # '[' -z 164624 ']' 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@952 -- # kill -0 164624 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@953 -- # uname 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 164624 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@966 -- # echo 'killing process with pid 164624' 00:06:35.743 killing process with pid 164624 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@967 -- # kill 164624 00:06:35.743 11:52:22 app_cmdline -- common/autotest_common.sh@972 -- # wait 164624 00:06:36.311 00:06:36.311 real 0m1.651s 00:06:36.311 user 0m1.959s 00:06:36.311 sys 0m0.419s 00:06:36.311 11:52:23 app_cmdline -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.311 
11:52:23 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 ************************************ 00:06:36.311 END TEST app_cmdline 00:06:36.311 ************************************ 00:06:36.311 11:52:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.311 11:52:23 -- spdk/autotest.sh@186 -- # run_test version /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.311 11:52:23 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:06:36.311 11:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.311 11:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 ************************************ 00:06:36.311 START TEST version 00:06:36.311 ************************************ 00:06:36.311 11:52:23 version -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/version.sh 00:06:36.311 * Looking for test storage... 00:06:36.311 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:06:36.311 11:52:23 version -- app/version.sh@17 -- # get_header_version major 00:06:36.311 11:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.311 11:52:23 version -- app/version.sh@17 -- # major=24 00:06:36.311 11:52:23 version -- app/version.sh@18 -- # get_header_version minor 00:06:36.311 11:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.311 11:52:23 version -- app/version.sh@18 -- # minor=9 00:06:36.311 11:52:23 version -- app/version.sh@19 -- # get_header_version patch 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.311 11:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:36.311 11:52:23 version -- app/version.sh@19 -- # patch=0 00:06:36.311 11:52:23 version -- app/version.sh@20 -- # get_header_version suffix 00:06:36.311 11:52:23 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/version.h 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # tr -d '"' 00:06:36.311 11:52:23 version -- app/version.sh@14 -- # cut -f2 00:06:36.311 11:52:23 version -- app/version.sh@20 -- # suffix=-pre 00:06:36.311 11:52:23 version -- app/version.sh@22 -- # version=24.9 00:06:36.311 11:52:23 version -- app/version.sh@25 -- # (( patch != 0 )) 00:06:36.311 11:52:23 version -- app/version.sh@28 -- # version=24.9rc0 00:06:36.311 11:52:23 version -- app/version.sh@30 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:06:36.311 11:52:23 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 
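The version test traced above only cross-checks include/spdk/version.h against the bundled Python bindings. Its grep/cut/tr pipeline can be wrapped as below; the helper name and argument form are illustrative, not the literal body of app/version.sh:

    get_header_version() {   # usage: get_header_version MAJOR|MINOR|PATCH|SUFFIX
        grep -E "^#define SPDK_VERSION_$1[[:space:]]+" include/spdk/version.h |
            cut -f2 | tr -d '"'
    }
    major=$(get_header_version MAJOR)      # 24
    minor=$(get_header_version MINOR)      # 9
    suffix=$(get_header_version SUFFIX)    # -pre, which version.sh renders as 24.9rc0
    # The header-derived string must match what the Python package reports:
    python3 -c 'import spdk; print(spdk.__version__)'   # 24.9rc0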
00:06:36.311 11:52:23 version -- app/version.sh@30 -- # py_version=24.9rc0 00:06:36.311 11:52:23 version -- app/version.sh@31 -- # [[ 24.9rc0 == \2\4\.\9\r\c\0 ]] 00:06:36.311 00:06:36.311 real 0m0.155s 00:06:36.311 user 0m0.076s 00:06:36.311 sys 0m0.111s 00:06:36.311 11:52:23 version -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:36.311 11:52:23 version -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 ************************************ 00:06:36.311 END TEST version 00:06:36.311 ************************************ 00:06:36.311 11:52:23 -- common/autotest_common.sh@1142 -- # return 0 00:06:36.311 11:52:23 -- spdk/autotest.sh@188 -- # '[' 0 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@198 -- # uname -s 00:06:36.311 11:52:23 -- spdk/autotest.sh@198 -- # [[ Linux == Linux ]] 00:06:36.311 11:52:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.311 11:52:23 -- spdk/autotest.sh@199 -- # [[ 0 -eq 1 ]] 00:06:36.311 11:52:23 -- spdk/autotest.sh@211 -- # '[' 0 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@260 -- # timing_exit lib 00:06:36.311 11:52:23 -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:36.311 11:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:36.311 11:52:23 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@270 -- # '[' 0 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@279 -- # '[' 1 -eq 1 ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@280 -- # export NET_TYPE 00:06:36.311 11:52:23 -- spdk/autotest.sh@283 -- # '[' tcp = rdma ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@286 -- # '[' tcp = tcp ']' 00:06:36.311 11:52:23 -- spdk/autotest.sh@287 -- # run_test nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.311 11:52:23 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.311 11:52:23 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.311 11:52:23 -- common/autotest_common.sh@10 -- # set +x 00:06:36.570 ************************************ 00:06:36.570 START TEST nvmf_tcp 00:06:36.570 ************************************ 00:06:36.570 11:52:23 nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf.sh --transport=tcp 00:06:36.570 * Looking for test storage... 00:06:36.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.570 11:52:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # uname -s 00:06:36.570 11:52:23 nvmf_tcp -- nvmf/nvmf.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:36.570 11:52:23 nvmf_tcp -- nvmf/nvmf.sh@14 -- # run_test nvmf_target_core /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:36.570 11:52:23 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.570 11:52:23 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.570 11:52:23 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:06:36.570 ************************************ 00:06:36.570 START TEST nvmf_target_core 00:06:36.570 ************************************ 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_core.sh --transport=tcp 00:06:36.570 * Looking for test storage... 
00:06:36.570 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # uname -s 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@10 -- # '[' '!' Linux = Linux ']' 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # uname -s 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.570 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@5 -- # export PATH 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@47 -- # : 0 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@16 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@18 -- # TEST_ARGS=("$@") 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@20 -- # [[ 0 -eq 0 ]] 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@21 -- # run_test nvmf_abort /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:36.571 11:52:23 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:36.831 ************************************ 00:06:36.831 START TEST nvmf_abort 00:06:36.831 ************************************ 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort.sh --transport=tcp 00:06:36.831 * Looking for test storage... 00:06:36.831 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # uname -s 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:36.831 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
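Setting aside the PATH plumbing, the piece of nvmf/common.sh that matters for later connections is the initiator identity it fixes up front. The traced assignments amount to the following; the exact expansion common.sh uses for the host ID is not shown in this excerpt, so the second line is just one way to arrive at the same value:

    NVME_HOSTNQN=$(nvme gen-hostnqn)        # nqn.2014-08.org.nvmexpress:uuid:<uuid>
    NVME_HOSTID=${NVME_HOSTNQN##*uuid:}     # the bare UUID, matching the traced NVME_HOSTID
    NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
    NVME_CONNECT='nvme connect'             # tests that use the kernel initiator prepend these options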
00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@5 -- # export PATH 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@47 -- # : 0 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@11 -- # MALLOC_BDEV_SIZE=64 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@12 -- # MALLOC_BLOCK_SIZE=4096 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@14 -- # nvmftestinit 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:36.832 11:52:23 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@285 -- # xtrace_disable 00:06:36.832 11:52:23 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # pci_devs=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # net_devs=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # e810=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@296 -- # local -ga e810 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # x722=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@297 -- # local -ga x722 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # mlx=() 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@298 -- # local -ga mlx 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:42.112 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:42.113 11:52:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:42.113 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:42.113 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.113 11:52:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:42.113 Found net devices under 0000:86:00.0: cvl_0_0 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:42.113 Found net devices under 0000:86:00.1: cvl_0_1 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@414 -- # is_hw=yes 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:42.113 11:52:28 
nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:42.113 11:52:28 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:42.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:06:42.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.166 ms 00:06:42.113 00:06:42.113 --- 10.0.0.2 ping statistics --- 00:06:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.113 rtt min/avg/max/mdev = 0.166/0.166/0.166/0.000 ms 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:42.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:06:42.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.260 ms 00:06:42.113 00:06:42.113 --- 10.0.0.1 ping statistics --- 00:06:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:42.113 rtt min/avg/max/mdev = 0.260/0.260/0.260/0.000 ms 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@422 -- # return 0 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@15 -- # nvmfappstart -m 0xE 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@481 -- # nvmfpid=168265 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@482 -- # waitforlisten 168265 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@829 -- # '[' -z 168265 ']' 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:42.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:42.113 11:52:29 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.113 [2024-07-25 11:52:29.309701] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:06:42.113 [2024-07-25 11:52:29.309745] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:42.113 EAL: No free 2048 kB hugepages reported on node 1 00:06:42.373 [2024-07-25 11:52:29.367366] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:42.373 [2024-07-25 11:52:29.448813] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:42.373 [2024-07-25 11:52:29.448853] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:42.373 [2024-07-25 11:52:29.448860] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:42.373 [2024-07-25 11:52:29.448866] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:42.373 [2024-07-25 11:52:29.448872] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:42.373 [2024-07-25 11:52:29.448918] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:42.373 [2024-07-25 11:52:29.449007] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:42.373 [2024-07-25 11:52:29.449008] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@862 -- # return 0 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -a 256 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:42.942 [2024-07-25 11:52:30.169163] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:42.942 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@20 -- # rpc_cmd bdev_malloc_create 64 4096 -b Malloc0 00:06:42.943 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:42.943 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 Malloc0 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@21 -- # rpc_cmd bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
common/autotest_common.sh@10 -- # set +x 00:06:43.203 Delay0 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 [2024-07-25 11:52:30.237716] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:43.203 11:52:30 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128 00:06:43.203 EAL: No free 2048 kB hugepages reported on node 1 00:06:43.203 [2024-07-25 11:52:30.309125] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:06:45.808 Initializing NVMe Controllers 00:06:45.808 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:06:45.808 controller IO queue size 128 less than required 00:06:45.808 Consider using lower queue depth or small IO size because IO requests may be queued at the NVMe driver. 00:06:45.808 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 0 00:06:45.808 Initialization complete. Launching workers. 
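The abort run whose output appears around this point was set up by the RPC sequence traced just above. Condensed, and assuming the cvl_0_0_ns_spdk namespace and the 10.0.0.2 listener address configured earlier in this log, the target and initiator sides look like this:

    # Target: nvmf_tgt runs inside the test namespace; -m 0xE puts reactors on cores 1-3.
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &

    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -a 256
    ./scripts/rpc.py bdev_malloc_create 64 4096 -b Malloc0        # 64 MB bdev, 4096-byte blocks
    # The delay bdev keeps I/O in flight long enough for the aborts to have something to hit.
    ./scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Delay0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420

    # Initiator: the bundled abort example drives queue depth 128 (-q) for one second (-t 1)
    # against that listener and submits aborts for the in-flight commands.
    ./build/examples/abort -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0x1 -t 1 -l warning -q 128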
00:06:45.808 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 I/O completed: 123, failed: 41983 00:06:45.808 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) abort submitted 42044, failed to submit 62 00:06:45.808 success 41987, unsuccess 57, failed 0 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@34 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@36 -- # trap - SIGINT SIGTERM EXIT 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- target/abort.sh@38 -- # nvmftestfini 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@488 -- # nvmfcleanup 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@117 -- # sync 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@120 -- # set +e 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@121 -- # for i in {1..20} 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:06:45.808 rmmod nvme_tcp 00:06:45.808 rmmod nvme_fabrics 00:06:45.808 rmmod nvme_keyring 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@124 -- # set -e 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@125 -- # return 0 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@489 -- # '[' -n 168265 ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@490 -- # killprocess 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@948 -- # '[' -z 168265 ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@952 -- # kill -0 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # uname 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 168265' 00:06:45.808 killing process with pid 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@967 -- # kill 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@972 -- # wait 168265 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- 
nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@278 -- # remove_spdk_ns 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:45.808 11:52:32 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.718 11:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:06:47.718 00:06:47.718 real 0m11.105s 00:06:47.718 user 0m13.687s 00:06:47.718 sys 0m4.832s 00:06:47.718 11:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:06:47.718 11:52:34 nvmf_tcp.nvmf_target_core.nvmf_abort -- common/autotest_common.sh@10 -- # set +x 00:06:47.718 ************************************ 00:06:47.718 END TEST nvmf_abort 00:06:47.718 ************************************ 00:06:47.977 11:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:06:47.977 11:52:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@22 -- # run_test nvmf_ns_hotplug_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:47.977 11:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:06:47.977 11:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:06:47.977 11:52:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:06:47.977 ************************************ 00:06:47.977 START TEST nvmf_ns_hotplug_stress 00:06:47.977 ************************************ 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh --transport=tcp 00:06:47.977 * Looking for test storage... 
00:06:47.977 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # uname -s 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:06:47.977 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@5 -- # export PATH 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@47 -- # : 0 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@22 -- # nvmftestinit 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:06:47.978 11:52:35 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # net_devs=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # e810=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@296 -- # local -ga e810 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # x722=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@297 -- # local -ga x722 
00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # mlx=() 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:06:53.259 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:06:53.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:06:53.260 11:52:40 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:06:53.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:06:53.260 Found net devices under 0000:86:00.0: cvl_0_0 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:06:53.260 Found net devices under 0000:86:00.1: cvl_0_1 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@401 -- # 
net_devs+=("${pci_net_devs[@]}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:06:53.260 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:06:53.520 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:06:53.520 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.165 ms 00:06:53.520 00:06:53.520 --- 10.0.0.2 ping statistics --- 00:06:53.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.520 rtt min/avg/max/mdev = 0.165/0.165/0.165/0.000 ms 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:06:53.520 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:06:53.520 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.230 ms 00:06:53.520 00:06:53.520 --- 10.0.0.1 ping statistics --- 00:06:53.520 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:06:53.520 rtt min/avg/max/mdev = 0.230/0.230/0.230/0.000 ms 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@422 -- # return 0 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@23 -- # nvmfappstart -m 0xE 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@481 -- # nvmfpid=172283 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@482 -- # waitforlisten 172283 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@829 -- # '[' -z 172283 ']' 00:06:53.520 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:53.521 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:06:53.521 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:53.521 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
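(Condensed sketch of the network setup traced above: this is roughly what nvmf_tcp_init in nvmf/common.sh does in this run, using the interface names and addresses that appear in this log (cvl_0_0, cvl_0_1, 10.0.0.1/2); it is a summary of the trace, not a general recipe.)
  # move the target-side port into its own network namespace
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  # initiator keeps 10.0.0.1 on the host, target gets 10.0.0.2 inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # allow NVMe/TCP traffic on port 4420 and verify reachability both ways
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  # the nvmf target is then started inside the namespace (nvmfappstart -m 0xE)
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &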
00:06:53.521 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:06:53.521 11:52:40 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:53.521 [2024-07-25 11:52:40.693516] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:06:53.521 [2024-07-25 11:52:40.693556] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:53.521 EAL: No free 2048 kB hugepages reported on node 1 00:06:53.521 [2024-07-25 11:52:40.750109] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:53.781 [2024-07-25 11:52:40.830616] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:06:53.781 [2024-07-25 11:52:40.830657] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:06:53.781 [2024-07-25 11:52:40.830665] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:06:53.781 [2024-07-25 11:52:40.830671] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:06:53.781 [2024-07-25 11:52:40.830675] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:06:53.781 [2024-07-25 11:52:40.830778] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:06:53.781 [2024-07-25 11:52:40.830881] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:06:53.781 [2024-07-25 11:52:40.830883] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@862 -- # return 0 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@25 -- # null_size=1000 00:06:54.350 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:06:54.609 [2024-07-25 11:52:41.703679] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:06:54.609 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:06:54.869 11:52:41 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:06:54.869 
[2024-07-25 11:52:42.074197] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:06:54.869 11:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:06:55.129 11:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 512 -b Malloc0 00:06:55.388 Malloc0 00:06:55.388 11:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:06:55.388 Delay0 00:06:55.647 11:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:55.647 11:52:42 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create NULL1 1000 512 00:06:55.906 NULL1 00:06:55.906 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:06:56.166 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@42 -- # PERF_PID=172767 00:06:56.166 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 30 -q 128 -w randread -o 512 -Q 1000 00:06:56.166 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:56.166 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.167 EAL: No free 2048 kB hugepages reported on node 1 00:06:56.167 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:56.426 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1001 00:06:56.426 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1001 00:06:56.686 true 00:06:56.686 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:56.686 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:56.946 11:52:43 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 
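(Condensed sketch of the RPC provisioning and the hotplug stress loop being traced from here on. The rpc.py arguments and paths are copied from the trace above; the explicit while-loop form is an assumption, since the trace only shows repeated 'kill -0 $PERF_PID' checks between iterations.)
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target-side provisioning: TCP transport, one subsystem, data + discovery listeners
  $rpc nvmf_create_transport -t tcp -o -u 8192
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # two bdevs: a delay bdev stacked on a malloc bdev, plus a resizable null bdev
  $rpc bdev_malloc_create 32 512 -b Malloc0
  $rpc bdev_delay_create -b Malloc0 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
  $rpc bdev_null_create NULL1 1000 512
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1
  # run perf against the subsystem for 30 s while namespaces are churned underneath it
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0x1 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
      -t 30 -q 128 -w randread -o 512 -Q 1000 &
  PERF_PID=$!
  null_size=1000
  while kill -0 "$PERF_PID" 2>/dev/null; do    # keep churning while perf is still running
      $rpc nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1
      $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0
      null_size=$((null_size + 1))
      $rpc bdev_null_resize NULL1 $null_size   # prints 'true' on success, as seen below
  done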
00:06:56.946 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1002 00:06:56.946 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1002 00:06:57.205 true 00:06:57.205 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:57.205 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.465 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:57.724 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1003 00:06:57.724 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1003 00:06:57.724 true 00:06:57.724 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:57.724 11:52:44 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:57.984 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.244 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1004 00:06:58.244 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1004 00:06:58.244 true 00:06:58.244 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:58.244 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:58.504 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:58.764 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1005 00:06:58.764 11:52:45 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1005 00:06:59.024 true 00:06:59.024 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:59.024 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.024 11:52:46 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.284 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1006 00:06:59.284 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1006 00:06:59.544 true 00:06:59.544 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:06:59.544 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:06:59.803 11:52:46 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:06:59.803 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1007 00:06:59.804 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1007 00:07:00.063 true 00:07:00.063 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:00.063 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.322 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:00.582 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1008 00:07:00.582 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1008 00:07:00.582 true 00:07:00.582 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:00.582 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:00.842 11:52:47 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.101 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1009 00:07:01.101 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1009 00:07:01.101 true 00:07:01.361 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:01.361 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:01.361 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:01.621 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1010 00:07:01.621 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1010 00:07:01.881 true 00:07:01.881 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:01.881 11:52:48 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.140 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.140 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1011 00:07:02.140 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1011 00:07:02.401 true 00:07:02.401 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:02.401 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:02.661 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:02.923 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1012 00:07:02.923 11:52:49 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1012 00:07:02.923 true 00:07:02.923 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:02.923 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.220 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.480 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1013 00:07:03.480 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1013 00:07:03.480 true 00:07:03.480 11:52:50 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:03.480 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:03.739 11:52:50 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:03.999 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1014 00:07:03.999 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1014 00:07:04.258 true 00:07:04.258 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:04.258 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:04.258 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:04.517 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1015 00:07:04.517 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1015 00:07:04.776 true 00:07:04.776 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:04.776 11:52:51 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.036 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.296 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1016 00:07:05.296 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1016 00:07:05.556 true 00:07:05.556 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:05.556 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:05.556 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:05.815 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1017 00:07:05.815 11:52:52 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1017 00:07:06.075 true 00:07:06.075 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:06.075 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.335 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:06.335 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1018 00:07:06.335 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1018 00:07:06.594 true 00:07:06.594 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:06.594 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:06.853 11:52:53 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.113 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1019 00:07:07.113 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1019 00:07:07.113 true 00:07:07.113 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:07.113 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.373 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:07.633 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1020 00:07:07.633 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1020 00:07:07.894 true 00:07:07.894 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:07.894 11:52:54 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:07.894 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.155 11:52:55 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1021 00:07:08.155 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1021 00:07:08.414 true 00:07:08.414 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:08.414 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:08.674 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:08.934 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1022 00:07:08.934 11:52:55 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1022 00:07:08.934 true 00:07:08.934 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:08.934 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.194 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.453 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1023 00:07:09.453 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1023 00:07:09.712 true 00:07:09.712 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:09.712 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:09.971 11:52:56 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:09.971 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1024 00:07:09.971 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1024 00:07:10.231 true 00:07:10.231 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:10.231 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:10.491 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:10.750 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1025 00:07:10.750 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1025 00:07:10.750 true 00:07:10.750 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:10.750 11:52:57 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.010 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.270 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1026 00:07:11.270 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1026 00:07:11.270 true 00:07:11.530 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:11.530 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:11.530 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:11.789 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1027 00:07:11.789 11:52:58 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1027 00:07:12.049 true 00:07:12.049 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:12.049 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.309 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:12.309 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1028 00:07:12.309 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1028 00:07:12.569 true 00:07:12.569 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:12.569 11:52:59 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:12.829 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.088 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1029 00:07:13.088 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1029 00:07:13.347 true 00:07:13.347 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:13.347 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:13.606 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:13.607 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1030 00:07:13.607 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1030 00:07:13.866 true 00:07:13.866 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:13.866 11:53:00 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.126 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.126 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1031 00:07:14.126 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1031 00:07:14.385 true 00:07:14.385 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:14.385 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:14.644 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:14.905 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1032 00:07:14.905 11:53:01 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1032 00:07:14.905 true 00:07:15.165 11:53:02 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:15.165 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.165 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:15.424 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1033 00:07:15.424 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1033 00:07:15.684 true 00:07:15.684 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:15.684 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:15.945 11:53:02 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.206 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1034 00:07:16.206 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1034 00:07:16.206 true 00:07:16.206 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:16.206 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:16.466 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:16.728 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1035 00:07:16.728 11:53:03 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1035 00:07:17.064 true 00:07:17.064 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:17.064 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.064 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.323 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1036 00:07:17.323 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1036 00:07:17.583 true 00:07:17.583 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:17.583 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:17.583 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:17.843 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1037 00:07:17.843 11:53:04 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1037 00:07:18.102 true 00:07:18.103 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:18.103 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.363 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:18.363 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1038 00:07:18.363 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1038 00:07:18.623 true 00:07:18.623 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:18.623 11:53:05 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:18.883 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.144 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1039 00:07:19.144 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1039 00:07:19.404 true 00:07:19.404 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:19.404 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:19.664 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:19.664 11:53:06 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1040 00:07:19.664 11:53:06 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1040 00:07:19.925 true 00:07:19.925 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:19.925 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.185 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.445 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1041 00:07:20.445 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1041 00:07:20.445 true 00:07:20.445 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:20.445 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:20.705 11:53:07 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:20.966 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1042 00:07:20.966 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1042 00:07:21.226 true 00:07:21.226 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:21.226 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:21.486 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:21.486 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1043 00:07:21.486 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1043 00:07:21.746 true 00:07:21.746 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:21.746 11:53:08 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.006 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- 
target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.267 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1044 00:07:22.267 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1044 00:07:22.267 true 00:07:22.267 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:22.267 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:22.527 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:22.788 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1045 00:07:22.788 11:53:09 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1045 00:07:23.049 true 00:07:23.049 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:23.049 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.049 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.310 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1046 00:07:23.310 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1046 00:07:23.569 true 00:07:23.569 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:23.569 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:23.829 11:53:10 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:23.829 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1047 00:07:23.829 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1047 00:07:24.089 true 00:07:24.089 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:24.089 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.348 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:24.608 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1048 00:07:24.608 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1048 00:07:24.608 true 00:07:24.608 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:24.608 11:53:11 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:24.867 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.127 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1049 00:07:25.127 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1049 00:07:25.387 true 00:07:25.388 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:25.388 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:25.388 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:25.647 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1050 00:07:25.647 11:53:12 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1050 00:07:25.907 true 00:07:25.907 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:25.907 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.167 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:26.427 Initializing NVMe Controllers 00:07:26.427 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:26.427 Controller IO queue size 128, less than required. 00:07:26.427 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:07:26.427 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:07:26.427 Initialization complete. Launching workers. 00:07:26.427 ======================================================== 00:07:26.427 Latency(us) 00:07:26.427 Device Information : IOPS MiB/s Average min max 00:07:26.427 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 26817.67 13.09 4772.96 2533.96 11781.43 00:07:26.427 ======================================================== 00:07:26.427 Total : 26817.67 13.09 4772.96 2533.96 11781.43 00:07:26.427 00:07:26.427 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@49 -- # null_size=1051 00:07:26.427 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_resize NULL1 1051 00:07:26.427 true 00:07:26.427 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@44 -- # kill -0 172767 00:07:26.427 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/ns_hotplug_stress.sh: line 44: kill: (172767) - No such process 00:07:26.427 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@53 -- # wait 172767 00:07:26.427 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:26.687 11:53:13 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # nthreads=8 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@58 -- # pids=() 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i = 0 )) 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null0 100 4096 00:07:26.947 null0 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:26.947 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.207 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null1 100 4096 00:07:27.207 null1 00:07:27.207 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.207 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.207 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null2 100 4096 00:07:27.467 null2 00:07:27.467 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # 
(( ++i )) 00:07:27.467 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.467 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null3 100 4096 00:07:27.467 null3 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null4 100 4096 00:07:27.727 null4 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.727 11:53:14 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null5 100 4096 00:07:27.987 null5 00:07:27.987 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:27.987 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:27.987 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null6 100 4096 00:07:28.247 null6 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_null_create null7 100 4096 00:07:28.247 null7 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( ++i )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@59 -- # (( i < nthreads )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i = 0 )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 1 null0 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=1 bdev=null0 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 2 null1 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=2 bdev=null1 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.247 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 3 null2 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=3 bdev=null2 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 4 null3 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=4 bdev=null3 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 5 null4 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=5 bdev=null4 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 6 null5 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=6 bdev=null5 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 7 null6 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 
00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=7 bdev=null6 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@64 -- # pids+=($!) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( ++i )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@62 -- # (( i < nthreads )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@66 -- # wait 178220 178221 178223 178225 178228 178231 178233 178236 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@63 -- # add_remove 8 null7 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@14 -- # local nsid=8 bdev=null7 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i = 0 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.248 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:28.507 11:53:15 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:28.507 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:28.767 11:53:15 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.026 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.027 11:53:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.027 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.287 11:53:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.287 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.548 11:53:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:29.548 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:16 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:29.809 11:53:16 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:29.809 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 
null7 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.069 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.329 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.330 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.330 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.637 11:53:17 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 
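What the trace above is exercising: target/ns_hotplug_stress.sh keeps attaching namespaces 1-8 (each backed by a null bdev null0..null7) to nqn.2016-06.io.spdk:cnode1 and detaching them again, with each worker bounded by the (( i < 10 )) guard at line 16. The shuffled ordering of the @16/@17/@18 lines suggests eight workers running in parallel, one per namespace. A minimal sketch of that pattern, reconstructed from the rpc.py calls in the log; the worker function and the backgrounding are assumptions, not the script verbatim:

    # Hedged reconstruction of the hotplug-stress cycle seen in the trace.
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    nqn=nqn.2016-06.io.spdk:cnode1

    add_remove() {                                      # hypothetical helper, one per namespace
        local nsid=$1 bdev=$2 i
        for ((i = 0; i < 10; i++)); do                  # ns_hotplug_stress.sh@16 in the trace
            $rpc nvmf_subsystem_add_ns -n "$nsid" "$nqn" "$bdev"    # @17
            $rpc nvmf_subsystem_remove_ns "$nqn" "$nsid"            # @18
        done
    }

    for n in {1..8}; do
        add_remove "$n" "null$((n - 1))" &              # nsid n backed by null bdev null(n-1)
    done
    wait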
00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.637 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:30.926 11:53:17 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:30.926 11:53:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.926 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:30.927 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.187 11:53:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.187 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 null2 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.447 11:53:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.447 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 4 nqn.2016-06.io.spdk:cnode1 null3 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 5 nqn.2016-06.io.spdk:cnode1 null4 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 6 nqn.2016-06.io.spdk:cnode1 null5 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 8 nqn.2016-06.io.spdk:cnode1 null7 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 2 nqn.2016-06.io.spdk:cnode1 null1 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 1 nqn.2016-06.io.spdk:cnode1 null0 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 7 nqn.2016-06.io.spdk:cnode1 null6 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:31.707 11:53:18 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@17 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns -n 3 nqn.2016-06.io.spdk:cnode1 
null2 00:07:31.967 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 6 00:07:31.967 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 8 00:07:31.967 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:07:31.967 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 4 00:07:31.967 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 7 00:07:31.968 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:07:31.968 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:07:31.968 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 3 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 
11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( ++i )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@16 -- # (( i < 10 )) 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- target/ns_hotplug_stress.sh@70 -- # nvmftestfini 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@117 -- # sync 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@120 -- # set +e 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:32.228 rmmod nvme_tcp 00:07:32.228 rmmod nvme_fabrics 00:07:32.228 rmmod nvme_keyring 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@124 -- # set -e 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@125 -- # return 0 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@489 -- # '[' -n 172283 ']' 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@490 -- # killprocess 172283 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@948 -- # '[' -z 172283 ']' 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@952 -- # kill -0 172283 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # uname 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 172283 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 172283' 00:07:32.228 killing process with pid 172283 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@967 -- # kill 172283 00:07:32.228 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@972 -- # wait 172283 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:32.489 11:53:19 
nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:32.489 11:53:19 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:35.030 00:07:35.030 real 0m46.666s 00:07:35.030 user 3m17.502s 00:07:35.030 sys 0m17.054s 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_ns_hotplug_stress -- common/autotest_common.sh@10 -- # set +x 00:07:35.030 ************************************ 00:07:35.030 END TEST nvmf_ns_hotplug_stress 00:07:35.030 ************************************ 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@23 -- # run_test nvmf_delete_subsystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:35.030 ************************************ 00:07:35.030 START TEST nvmf_delete_subsystem 00:07:35.030 ************************************ 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh --transport=tcp 00:07:35.030 * Looking for test storage... 
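The hotplug-stress run finishes by clearing its traps and calling nvmftestfini: the host-side NVMe/TCP modules are unloaded (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines), the nvmf_tgt process stored in nvmfpid (172283 in this run) is killed, and the target network namespace and its addresses are flushed before the harness moves on to nvmf_delete_subsystem. A minimal sketch of that teardown, with the pid and interface/namespace names taken from the log and the function body treated as a reconstruction rather than the harness verbatim:

    nvmfpid=172283
    nvmftestfini_sketch() {
        sync
        modprobe -v -r nvme-tcp                             # drops nvme_tcp (the rmmod lines above)
        modprobe -v -r nvme-fabrics                         # drops nvme_fabrics / nvme_keyring
        if kill -0 "$nvmfpid" 2> /dev/null; then
            kill "$nvmfpid"                                 # killprocess 172283 in the log
            wait "$nvmfpid" 2> /dev/null || true
        fi
        ip netns delete cvl_0_0_ns_spdk 2> /dev/null || true   # remove_spdk_ns
        ip -4 addr flush cvl_0_1                            # nvmf/common.sh@279 above
    }
    nvmftestfini_sketch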
00:07:35.030 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # uname -s 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@5 -- # export PATH 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@47 -- # : 0 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@12 -- # nvmftestinit 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:35.030 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@285 -- # xtrace_disable 00:07:35.031 11:53:21 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # pci_devs=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # net_devs=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # e810=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@296 -- # local -ga e810 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # x722=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@297 -- # local -ga x722 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # mlx=() 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@298 -- # local -ga mlx 
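At this point the delete_subsystem test has sourced nvmf/common.sh (port 4420, a generated host NQN, the usual defaults) and nvmftestinit begins scanning the machine for usable NICs, grouping supported Intel E810/X722 and Mellanox device IDs into the arrays being declared here. A rough sketch of that discovery step, under the assumption that it reduces to matching PCI vendor/device IDs and keeping functions that expose a kernel net device; the IDs and the resulting cvl_0_0/cvl_0_1 names are from the log, the sysfs walk below is only illustrative:

    intel=0x8086
    e810=(0x1592 0x159b)                 # E810 family; this host matches 0x159b (driver "ice")
    net_devs=()
    for pci in /sys/bus/pci/devices/*; do
        [[ $(< "$pci/vendor") == "$intel" ]] || continue
        dev=$(< "$pci/device")
        for id in "${e810[@]}"; do
            [[ $dev == "$id" ]] || continue
            for netdir in "$pci"/net/*; do
                [[ -d $netdir ]] && net_devs+=("${netdir##*/}")
            done
        done
    done
    echo "Found net devices: ${net_devs[*]}"   # cvl_0_0 cvl_0_1 on this host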
00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:40.314 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:40.314 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:40.314 Found net devices under 0000:86:00.0: cvl_0_0 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:40.314 Found net devices under 0000:86:00.1: cvl_0_1 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@414 -- # is_hw=yes 00:07:40.314 11:53:27 
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:40.314 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:40.315 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:07:40.315 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:07:40.315 00:07:40.315 --- 10.0.0.2 ping statistics --- 00:07:40.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.315 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:40.315 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:07:40.315 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:07:40.315 00:07:40.315 --- 10.0.0.1 ping statistics --- 00:07:40.315 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:40.315 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@422 -- # return 0 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@13 -- # nvmfappstart -m 0x3 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@481 -- # nvmfpid=182593 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@482 -- # waitforlisten 182593 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@829 -- # '[' -z 182593 ']' 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:40.315 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:40.315 11:53:27 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:40.574 [2024-07-25 11:53:27.578197] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
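The nvmf_tcp_init trace above builds the back-to-back topology the test runs on: the first E810 port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, the second port (cvl_0_1) stays in the root namespace as the initiator at 10.0.0.1, TCP port 4420 is opened in iptables, and both directions are ping-verified before the target application starts. Collected in one place for readability; every command below appears in the trace, only the variable shorthand is added:

    NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add "$NVMF_TARGET_NAMESPACE"
    ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"
    ip addr add 10.0.0.1/24 dev cvl_0_1
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
    ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                            # initiator -> target
    ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 10.0.0.1     # target -> initiator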
00:07:40.575 [2024-07-25 11:53:27.578239] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:40.575 EAL: No free 2048 kB hugepages reported on node 1 00:07:40.575 [2024-07-25 11:53:27.634527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:07:40.575 [2024-07-25 11:53:27.712609] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:40.575 [2024-07-25 11:53:27.712648] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:40.575 [2024-07-25 11:53:27.712655] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:40.575 [2024-07-25 11:53:27.712661] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:40.575 [2024-07-25 11:53:27.712666] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:40.575 [2024-07-25 11:53:27.712733] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:40.575 [2024-07-25 11:53:27.712735] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:41.143 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:41.143 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@862 -- # return 0 00:07:41.143 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:41.143 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:41.143 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.402 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 [2024-07-25 11:53:28.416815] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 [2024-07-25 11:53:28.432972] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 NULL1 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@23 -- # rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 Delay0 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@28 -- # perf_pid=182837 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@30 -- # sleep 2 00:07:41.403 11:53:28 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:41.403 EAL: No free 2048 kB hugepages reported on node 1 00:07:41.403 [2024-07-25 11:53:28.517573] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 
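Collected from the xtrace above, the delete_subsystem test assembles its target with the following RPC sequence and then starts a background perf workload against it; the subsystem is deleted out from under that workload next, which is what produces the failed completions summarized below. This is a consolidated sketch of the traced commands, not the verbatim delete_subsystem.sh; rpc_cmd is the test helper that issues SPDK JSON-RPCs to the target's /var/tmp/spdk.sock socket (typically via scripts/rpc.py).

    # TCP transport, a subsystem with one listener, and a null bdev wrapped in a delay bdev as its namespace.
    rpc_cmd nvmf_create_transport -t tcp -o -u 8192
    rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    rpc_cmd bdev_null_create NULL1 1000 512
    rpc_cmd bdev_delay_create -b NULL1 -d Delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
    rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0

    # 5 seconds of queued randrw I/O (70% reads, qd 128, 512-byte IOs) from the initiator side,
    # left running in the background so the subsystem can be deleted underneath it.
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf \
        -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' \
        -t 5 -q 128 -w randrw -M 70 -o 512 -P 4 &
    perf_pid=$!
    sleep 2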
00:07:43.309 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@32 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:07:43.309 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable
00:07:43.309 11:53:30 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x
[many repeated 'Read completed with error (sct=0, sc=8)', 'Write completed with error (sct=0, sc=8)' and 'starting I/O failed: -6' completions from the in-flight perf workload omitted; only the distinct error messages are kept below]
[2024-07-25 11:53:30.695615] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71710 is same with the state(5) to be set
[2024-07-25 11:53:30.695979] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f49dc00d000 is same with the state(5) to be set
[2024-07-25 11:53:31.657968] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d72ac0 is same with the state(5) to be set
[2024-07-25 11:53:31.696504] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x7f49dc00d330 is same with the state(5) to be set
[2024-07-25 11:53:31.698001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71000 is same with the state(5) to be set
[2024-07-25 11:53:31.698156] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d713e0 is same with the state(5) to be set
[remaining 'Read/Write completed with error (sct=0, sc=8)' completions omitted]
[2024-07-25 11:53:31.698294] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1d71a40 is same with the state(5) to be set
00:07:44.504 Initializing NVMe Controllers
00:07:44.504 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1
00:07:44.504 Controller IO queue size 128, less than required.
00:07:44.504 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver.
00:07:44.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2
00:07:44.504 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3
00:07:44.504 Initialization complete. Launching workers.
00:07:44.504 ========================================================
00:07:44.504 Latency(us)
00:07:44.504 Device Information                                                       : IOPS    MiB/s  Average     min         max
00:07:44.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 187.13   0.09   951873.54   967.41   1012391.54
00:07:44.504 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 152.88   0.07   885943.40   230.31   1012375.87
00:07:44.504 ========================================================
00:07:44.504 Total                                                                    : 340.01   0.17   922229.04   230.31   1012391.54
00:07:44.504
00:07:44.504 [2024-07-25 11:53:31.698990] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1d72ac0 (9): Bad file descriptor
00:07:44.504 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf: errors occurred
00:07:44.504 11:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:07:44.504 11:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@34 -- # delay=0
00:07:44.504 11:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@35 -- # kill -0 182837
00:07:44.504 11:53:31 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@36 -- # sleep 0.5
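The delay=0 / kill -0 182837 / sleep 0.5 lines that start here, and the (( delay++ > 30 )) checks that continue below, come from the wait loop in delete_subsystem.sh that gives perf a bounded amount of time to notice the deleted subsystem and exit. A plausible reconstruction from the traced line numbers, not the verbatim script:

    # Sketch of the wait loop traced around delete_subsystem.sh lines 34-45.
    delay=0
    while kill -0 $perf_pid; do          # kill -0 only tests whether the PID still exists
        sleep 0.5
        (( delay++ > 30 )) && exit 1     # roughly a 15 s budget; the exact failure action is assumed
    done
    NOT wait $perf_pid                   # perf must exit non-zero, since its outstanding I/O was failed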
nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # type -t wait 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # wait 182837 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@651 -- # es=1 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.070 [2024-07-25 11:53:32.227723] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@50 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Delay0 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@54 -- # perf_pid=183456 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@56 -- # delay=0 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -c 0xC -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -t 3 -q 128 -w randrw -M 70 -o 512 -P 4 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:45.070 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:45.070 EAL: No free 2048 kB 
hugepages reported on node 1 00:07:45.070 [2024-07-25 11:53:32.290706] subsystem.c:1572:spdk_nvmf_subsystem_listener_allowed: *WARNING*: Allowing connection to discovery subsystem on TCP/10.0.0.2/4420, even though this listener was not added to the discovery subsystem. This behavior is deprecated and will be removed in a future release. 00:07:45.635 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:45.635 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:45.635 11:53:32 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.210 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.210 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:46.210 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:46.779 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:46.779 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:46.779 11:53:33 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.039 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.039 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:47.039 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:47.607 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:47.607 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:47.607 11:53:34 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.174 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.174 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:48.174 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@58 -- # sleep 0.5 00:07:48.433 Initializing NVMe Controllers 00:07:48.433 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:07:48.433 Controller IO queue size 128, less than required. 00:07:48.434 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:07:48.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 2 00:07:48.434 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 3 00:07:48.434 Initialization complete. Launching workers. 
00:07:48.434 ======================================================== 00:07:48.434 Latency(us) 00:07:48.434 Device Information : IOPS MiB/s Average min max 00:07:48.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 2: 128.00 0.06 1004062.53 1000423.13 1041434.56 00:07:48.434 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 3: 128.00 0.06 1005699.10 1000553.84 1013112.09 00:07:48.434 ======================================================== 00:07:48.434 Total : 256.00 0.12 1004880.82 1000423.13 1041434.56 00:07:48.434 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@60 -- # (( delay++ > 20 )) 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@57 -- # kill -0 183456 00:07:48.693 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/delete_subsystem.sh: line 57: kill: (183456) - No such process 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@67 -- # wait 183456 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- target/delete_subsystem.sh@71 -- # nvmftestfini 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@117 -- # sync 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@120 -- # set +e 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:07:48.693 rmmod nvme_tcp 00:07:48.693 rmmod nvme_fabrics 00:07:48.693 rmmod nvme_keyring 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@124 -- # set -e 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@125 -- # return 0 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@489 -- # '[' -n 182593 ']' 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@490 -- # killprocess 182593 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@948 -- # '[' -z 182593 ']' 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@952 -- # kill -0 182593 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # uname 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 182593 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo 
']' 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@966 -- # echo 'killing process with pid 182593' 00:07:48.693 killing process with pid 182593 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@967 -- # kill 182593 00:07:48.693 11:53:35 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@972 -- # wait 182593 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@278 -- # remove_spdk_ns 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:48.953 11:53:36 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:07:51.035 00:07:51.035 real 0m16.382s 00:07:51.035 user 0m30.541s 00:07:51.035 sys 0m5.072s 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_delete_subsystem -- common/autotest_common.sh@10 -- # set +x 00:07:51.035 ************************************ 00:07:51.035 END TEST nvmf_delete_subsystem 00:07:51.035 ************************************ 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@26 -- # run_test nvmf_host_management /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:07:51.035 ************************************ 00:07:51.035 START TEST nvmf_host_management 00:07:51.035 ************************************ 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh --transport=tcp 00:07:51.035 * Looking for test storage... 
00:07:51.035 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # uname -s 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:07:51.035 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@5 -- # export PATH 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@47 -- # : 0 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@51 -- # have_pci_nics=0 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@11 -- # MALLOC_BDEV_SIZE=64 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@105 -- # nvmftestinit 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@448 -- # prepare_net_devs 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@410 -- # local -g is_hw=no 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@412 -- # remove_spdk_ns 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@285 -- # xtrace_disable 00:07:51.295 11:53:38 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # pci_devs=() 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@291 -- # local -a pci_devs 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # pci_net_devs=() 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # pci_drivers=() 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@293 -- # local -A pci_drivers 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # net_devs=() 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@295 -- # local -ga net_devs 00:07:56.577 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # e810=() 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@296 -- # local -ga e810 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # x722=() 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@297 -- # local -ga x722 00:07:56.578 
11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # mlx=() 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@298 -- # local -ga mlx 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:07:56.578 Found 0000:86:00.0 (0x8086 - 0x159b) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@341 
-- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:07:56.578 Found 0000:86:00.1 (0x8086 - 0x159b) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:07:56.578 Found net devices under 0000:86:00.0: cvl_0_0 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@390 -- # [[ up == up ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:07:56.578 Found net devices under 0000:86:00.1: cvl_0_1 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@404 -- # (( 2 == 
0 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@414 -- # is_hw=yes 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:07:56.578 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:07:56.578 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:07:56.578 00:07:56.578 --- 10.0.0.2 ping statistics --- 00:07:56.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.578 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:07:56.578 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:07:56.578 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.423 ms 00:07:56.578 00:07:56.578 --- 10.0.0.1 ping statistics --- 00:07:56.578 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:07:56.578 rtt min/avg/max/mdev = 0.423/0.423/0.423/0.000 ms 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@422 -- # return 0 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@107 -- # nvmf_host_management 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@69 -- # starttarget 00:07:56.578 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@16 -- # nvmfappstart -m 0x1E 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@481 -- # nvmfpid=187532 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@482 -- # waitforlisten 187532 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 187532 ']' 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on 
UNIX domain socket /var/tmp/spdk.sock...' 00:07:56.579 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:56.579 11:53:43 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:56.838 [2024-07-25 11:53:43.835384] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:07:56.838 [2024-07-25 11:53:43.835427] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:07:56.838 EAL: No free 2048 kB hugepages reported on node 1 00:07:56.838 [2024-07-25 11:53:43.893289] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:07:56.838 [2024-07-25 11:53:43.975353] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:07:56.838 [2024-07-25 11:53:43.975386] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:07:56.838 [2024-07-25 11:53:43.975393] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:07:56.838 [2024-07-25 11:53:43.975400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:07:56.838 [2024-07-25 11:53:43.975404] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:07:56.838 [2024-07-25 11:53:43.975498] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:07:56.838 [2024-07-25 11:53:43.975604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:07:56.838 [2024-07-25 11:53:43.975710] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:07:56.838 [2024-07-25 11:53:43.975712] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:07:57.408 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:57.408 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:57.408 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:07:57.408 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.408 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 [2024-07-25 11:53:44.679213] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@20 -- # timing_enter 
create_subsystem 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@722 -- # xtrace_disable 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@22 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@23 -- # cat 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@30 -- # rpc_cmd 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 Malloc0 00:07:57.669 [2024-07-25 11:53:44.738849] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@31 -- # timing_exit create_subsystems 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@728 -- # xtrace_disable 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@73 -- # perfpid=187803 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@74 -- # waitforlisten 187803 /var/tmp/bdevperf.sock 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@829 -- # '[' -z 187803 ']' 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@72 -- # gen_nvmf_target_json 0 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@834 -- # local max_retries=100 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:07:57.669 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
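The rpcs.txt batch that host_management.sh pipes into rpc_cmd at the @22-@30 steps above is never echoed in the trace; only its side effects show up (the Malloc0 bdev and the TCP listener on 10.0.0.2:4420). Based on those and on the cnode0/host0 NQNs used later in the run, the batch is roughly the following sequence of SPDK rpc.py calls -- the malloc size/block size and the serial number are assumptions, not values taken from this log:

    # reconstructed contents of rpcs.txt (sizes and serial are guesses)
    bdev_malloc_create 64 512 -b Malloc0
    nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -s SPDK0000000000000001
    nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 Malloc0
    nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0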
00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@838 -- # xtrace_disable 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:57.669 { 00:07:57.669 "params": { 00:07:57.669 "name": "Nvme$subsystem", 00:07:57.669 "trtype": "$TEST_TRANSPORT", 00:07:57.669 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:57.669 "adrfam": "ipv4", 00:07:57.669 "trsvcid": "$NVMF_PORT", 00:07:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:57.669 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:57.669 "hdgst": ${hdgst:-false}, 00:07:57.669 "ddgst": ${ddgst:-false} 00:07:57.669 }, 00:07:57.669 "method": "bdev_nvme_attach_controller" 00:07:57.669 } 00:07:57.669 EOF 00:07:57.669 )") 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:57.669 11:53:44 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:57.669 "params": { 00:07:57.669 "name": "Nvme0", 00:07:57.669 "trtype": "tcp", 00:07:57.669 "traddr": "10.0.0.2", 00:07:57.669 "adrfam": "ipv4", 00:07:57.669 "trsvcid": "4420", 00:07:57.669 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:57.669 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:57.669 "hdgst": false, 00:07:57.669 "ddgst": false 00:07:57.669 }, 00:07:57.669 "method": "bdev_nvme_attach_controller" 00:07:57.669 }' 00:07:57.669 [2024-07-25 11:53:44.832666] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:07:57.669 [2024-07-25 11:53:44.832715] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid187803 ] 00:07:57.669 EAL: No free 2048 kB hugepages reported on node 1 00:07:57.669 [2024-07-25 11:53:44.887197] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:57.930 [2024-07-25 11:53:44.962173] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:07:57.930 Running I/O for 10 seconds... 
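gen_nvmf_target_json only prints the per-controller params object shown above; bdevperf receives it on /dev/fd/63 wrapped in a full JSON config document. Assuming the usual SPDK subsystems/config wrapper (the wrapper itself is not visible in this trace), the file handed to --json looks roughly like:

    {
      "subsystems": [
        {
          "subsystem": "bdev",
          "config": [
            {
              "method": "bdev_nvme_attach_controller",
              "params": {
                "name": "Nvme0",
                "trtype": "tcp",
                "traddr": "10.0.0.2",
                "adrfam": "ipv4",
                "trsvcid": "4420",
                "subnqn": "nqn.2016-06.io.spdk:cnode0",
                "hostnqn": "nqn.2016-06.io.spdk:host0",
                "hdgst": false,
                "ddgst": false
              }
            }
          ]
        }
      ]
    }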
00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@862 -- # return 0 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@75 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@78 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@80 -- # waitforio /var/tmp/bdevperf.sock Nvme0n1 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@45 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@49 -- # '[' -z Nvme0n1 ']' 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@52 -- # local ret=1 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@53 -- # local i 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i = 10 )) 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@54 -- # (( i != 0 )) 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # jq -r '.bdevs[0].num_read_ops' 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@55 -- # read_io_count=451 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@58 -- # '[' 451 -ge 100 ']' 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@59 -- # ret=0 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@60 -- # break 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@64 -- # return 0 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@84 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.502 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.502 [2024-07-25 
11:53:45.714051] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714092] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714100] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714106] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714113] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714124] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714130] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714136] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714142] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714148] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714154] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714178] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714190] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714196] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714202] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714208] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714214] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714220] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to 
be set 00:07:58.502 [2024-07-25 11:53:45.714225] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714231] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714237] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714255] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714267] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714273] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714279] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714286] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714292] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714298] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714304] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.502 [2024-07-25 11:53:45.714310] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714316] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714322] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714328] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714334] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714341] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714347] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714353] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714359] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714365] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714371] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714378] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714384] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714389] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714395] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714401] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714407] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714413] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714418] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.714424] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xa00580 is same with the state(5) to be set 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@85 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@559 -- # xtrace_disable 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:07:58.503 11:53:45 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@87 -- # sleep 1 00:07:58.503 [2024-07-25 11:53:45.728738] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.503 [2024-07-25 11:53:45.728774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728784] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.503 [2024-07-25 11:53:45.728792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728800] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.503 [2024-07-25 11:53:45.728807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728814] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:07:58.503 [2024-07-25 11:53:45.728821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728828] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1ac7980 is same with the state(5) to be set 00:07:58.503 [2024-07-25 11:53:45.728910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:65408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:65536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:65664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:65792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728968] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:65920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.728991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:66048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.728998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:66176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:66304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729032] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:66432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729094] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:66560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:66688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:66816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:66944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:67072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729159] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:67200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:67328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:67456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:67584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED 
- SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.503 [2024-07-25 11:53:45.729230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:67712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.503 [2024-07-25 11:53:45.729236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:67840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:67968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:68096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:68224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:68352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:68480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:68608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:68736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729371] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:68864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION 
(00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:68992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:69120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:69248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:69376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:69504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:69632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:69760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:69888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:70016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:70144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 
cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:70272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:70400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:70528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:70656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:70784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:70912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:71040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:71168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:71296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:71424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:71552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:71680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:71808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:71936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:72064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:72192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:72320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:72448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.504 [2024-07-25 11:53:45.729816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:72576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.504 [2024-07-25 11:53:45.729822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:72704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 
m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:72832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:72960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:73088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:73216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:73344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:73472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:07:58.505 [2024-07-25 11:53:45.729924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:07:58.505 [2024-07-25 11:53:45.729983] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1ef9660 was disconnected and freed. reset controller. 00:07:58.505 [2024-07-25 11:53:45.730879] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:07:58.505 task offset: 65408 on job bdev=Nvme0n1 fails 00:07:58.505 00:07:58.505 Latency(us) 00:07:58.505 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:07:58.505 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:07:58.505 Job: Nvme0n1 ended in about 0.56 seconds with error 00:07:58.505 Verification LBA range: start 0x0 length 0x400 00:07:58.505 Nvme0n1 : 0.56 904.79 56.55 113.32 0.00 61755.72 1503.05 61090.95 00:07:58.505 =================================================================================================================== 00:07:58.505 Total : 904.79 56.55 113.32 0.00 61755.72 1503.05 61090.95 00:07:58.505 [2024-07-25 11:53:45.732497] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:58.505 [2024-07-25 11:53:45.732510] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1ac7980 (9): Bad file descriptor 00:07:58.764 [2024-07-25 11:53:45.754297] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
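The wall of ABORTED/SQ DELETION completions and the controller reset above are the fault that host_management.sh injects on purpose: once the @55 iostat check sees enough completed reads, @84 revokes host0's access, the target tears down the qpair, bdev_nvme resets and (after @85 restores the host) reconnects, and @91 then kills bdevperf outright. Condensed into shell, using only steps visible in this trace (the waitforio loop bounds are simplified away):

    # wait until bdevperf has done >=100 reads (host_management.sh@55/@58)
    read_io_count=$(rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme0n1 \
                    | jq -r '.bdevs[0].num_read_ops')
    # yank and restore the host's access to force a controller reset (@84/@85)
    rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host0
    sleep 1                     # @87: let bdev_nvme finish the reset/reconnect
    kill -9 "$perfpid" || true  # @91: bdevperf has already exited, hence "No such process"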
00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # kill -9 187803 00:07:59.705 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/host_management.sh: line 91: kill: (187803) - No such process 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@91 -- # true 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@97 -- # rm -f /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 /var/tmp/spdk_cpu_lock_003 /var/tmp/spdk_cpu_lock_004 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@100 -- # gen_nvmf_target_json 0 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # config=() 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@532 -- # local subsystem config 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:07:59.705 { 00:07:59.705 "params": { 00:07:59.705 "name": "Nvme$subsystem", 00:07:59.705 "trtype": "$TEST_TRANSPORT", 00:07:59.705 "traddr": "$NVMF_FIRST_TARGET_IP", 00:07:59.705 "adrfam": "ipv4", 00:07:59.705 "trsvcid": "$NVMF_PORT", 00:07:59.705 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:07:59.705 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:07:59.705 "hdgst": ${hdgst:-false}, 00:07:59.705 "ddgst": ${ddgst:-false} 00:07:59.705 }, 00:07:59.705 "method": "bdev_nvme_attach_controller" 00:07:59.705 } 00:07:59.705 EOF 00:07:59.705 )") 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@554 -- # cat 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@556 -- # jq . 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@557 -- # IFS=, 00:07:59.705 11:53:46 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:07:59.705 "params": { 00:07:59.705 "name": "Nvme0", 00:07:59.705 "trtype": "tcp", 00:07:59.705 "traddr": "10.0.0.2", 00:07:59.705 "adrfam": "ipv4", 00:07:59.705 "trsvcid": "4420", 00:07:59.705 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:07:59.705 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:07:59.705 "hdgst": false, 00:07:59.705 "ddgst": false 00:07:59.705 }, 00:07:59.705 "method": "bdev_nvme_attach_controller" 00:07:59.705 }' 00:07:59.705 [2024-07-25 11:53:46.781643] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:07:59.705 [2024-07-25 11:53:46.781691] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid188053 ] 00:07:59.705 EAL: No free 2048 kB hugepages reported on node 1 00:07:59.705 [2024-07-25 11:53:46.836715] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:59.705 [2024-07-25 11:53:46.908945] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:00.275 Running I/O for 1 seconds... 00:08:01.215 00:08:01.215 Latency(us) 00:08:01.215 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:01.215 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:08:01.215 Verification LBA range: start 0x0 length 0x400 00:08:01.215 Nvme0n1 : 1.10 930.38 58.15 0.00 0.00 65562.66 17210.32 61090.95 00:08:01.215 =================================================================================================================== 00:08:01.215 Total : 930.38 58.15 0.00 0.00 65562.66 17210.32 61090.95 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@102 -- # stoptarget 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@36 -- # rm -f ./local-job0-0-verify.state 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@37 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@38 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@40 -- # nvmftestfini 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@117 -- # sync 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@120 -- # set +e 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:01.475 rmmod nvme_tcp 00:08:01.475 rmmod nvme_fabrics 00:08:01.475 rmmod nvme_keyring 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@124 -- # set -e 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@125 -- # return 0 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@489 -- # '[' -n 187532 ']' 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@490 -- # killprocess 187532 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@948 -- # '[' -z 187532 ']' 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@952 -- # kill -0 187532 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- 
common/autotest_common.sh@953 -- # uname 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 187532 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@966 -- # echo 'killing process with pid 187532' 00:08:01.475 killing process with pid 187532 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@967 -- # kill 187532 00:08:01.475 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@972 -- # wait 187532 00:08:01.736 [2024-07-25 11:53:48.807202] app.c: 711:unclaim_cpu_cores: *ERROR*: Failed to unlink lock fd for core 1, errno: 2 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:01.736 11:53:48 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.647 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:03.647 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- target/host_management.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:08:03.647 00:08:03.647 real 0m12.708s 00:08:03.647 user 0m23.665s 00:08:03.647 sys 0m5.146s 00:08:03.647 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:03.647 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_host_management -- common/autotest_common.sh@10 -- # set +x 00:08:03.647 ************************************ 00:08:03.647 END TEST nvmf_host_management 00:08:03.647 ************************************ 00:08:03.907 11:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:03.907 11:53:50 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@27 -- # run_test nvmf_lvol /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.907 11:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:03.907 11:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:03.907 11:53:50 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:03.907 ************************************ 00:08:03.907 START TEST nvmf_lvol 00:08:03.907 
************************************ 00:08:03.908 11:53:50 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvol.sh --transport=tcp 00:08:03.908 * Looking for test storage... 00:08:03.908 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # uname -s 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@5 -- # export PATH 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@47 -- # : 0 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@11 -- # MALLOC_BDEV_SIZE=64 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@13 -- # LVOL_BDEV_INIT_SIZE=20 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@14 -- # LVOL_BDEV_FINAL_SIZE=30 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@18 -- # nvmftestinit 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@285 -- # xtrace_disable 00:08:03.908 11:53:51 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # pci_devs=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # net_devs=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # e810=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@296 -- # local -ga e810 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # x722=() 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@297 -- # local -ga x722 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # mlx=() 
00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@298 -- # local -ga mlx 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:09.187 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:09.187 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- 
nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:09.187 Found net devices under 0000:86:00.0: cvl_0_0 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:09.187 Found net devices under 0000:86:00.1: cvl_0_1 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@414 -- # is_hw=yes 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:09.187 11:53:56 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:09.187 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:09.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:09.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.259 ms 00:08:09.446 00:08:09.446 --- 10.0.0.2 ping statistics --- 00:08:09.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.446 rtt min/avg/max/mdev = 0.259/0.259/0.259/0.000 ms 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:09.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:09.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:08:09.446 00:08:09.446 --- 10.0.0.1 ping statistics --- 00:08:09.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:09.446 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@422 -- # return 0 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@19 -- # nvmfappstart -m 0x7 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@481 -- # nvmfpid=191828 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@482 -- # waitforlisten 191828 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@829 -- # '[' -z 191828 ']' 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:09.446 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:09.446 11:53:56 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:09.446 [2024-07-25 11:53:56.642144] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
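Before the lvol test proper starts, the harness has split one port into a target network namespace and kept the other as the initiator, as the commands above show. A sketch of that plumbing, using the device names and addresses from this run:

    # Sketch: target/initiator split used by the tcp autotest (names and IPs as logged).
    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP in
    ping -c 1 10.0.0.2                                             # initiator -> target
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator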
00:08:09.446 [2024-07-25 11:53:56.642188] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:09.446 EAL: No free 2048 kB hugepages reported on node 1 00:08:09.706 [2024-07-25 11:53:56.699627] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:08:09.706 [2024-07-25 11:53:56.779546] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:09.706 [2024-07-25 11:53:56.779579] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:09.706 [2024-07-25 11:53:56.779587] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:09.706 [2024-07-25 11:53:56.779592] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:09.706 [2024-07-25 11:53:56.779597] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:09.706 [2024-07-25 11:53:56.779636] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:09.706 [2024-07-25 11:53:56.779729] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:09.706 [2024-07-25 11:53:56.779730] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@862 -- # return 0 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:10.274 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:10.533 [2024-07-25 11:53:57.648582] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:10.533 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.793 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@24 -- # base_bdevs='Malloc0 ' 00:08:10.793 11:53:57 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:08:10.793 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@25 -- # base_bdevs+=Malloc1 00:08:10.793 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc0 Malloc1' 00:08:11.052 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore raid0 lvs 00:08:11.311 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@29 -- # lvs=71af68a9-f7b9-4278-a263-674984707d23 
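The lvol stack for this test is assembled entirely over rpc.py: two malloc bdevs are striped into raid0, an lvstore is created on it, and, continuing through the lines that follow, an lvol is carved out, exported through cnode0, then snapshotted, resized, cloned and inflated while spdk_nvme_perf runs against it. A condensed sketch of that RPC sequence, with the generated UUIDs replaced by shell variables:

    # Sketch: the RPC sequence this test drives (commands as logged; UUIDs parameterized).
    rpc.py nvmf_create_transport -t tcp -o -u 8192
    rpc.py bdev_malloc_create 64 512                       # -> Malloc0
    rpc.py bdev_malloc_create 64 512                       # -> Malloc1
    rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b "Malloc0 Malloc1"
    lvs=$(rpc.py bdev_lvol_create_lvstore raid0 lvs)
    lvol=$(rpc.py bdev_lvol_create -u "$lvs" lvol 20)
    rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0
    rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 "$lvol"
    rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
    snap=$(rpc.py bdev_lvol_snapshot "$lvol" MY_SNAPSHOT)
    rpc.py bdev_lvol_resize "$lvol" 30
    clone=$(rpc.py bdev_lvol_clone "$snap" MY_CLONE)
    rpc.py bdev_lvol_inflate "$clone"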
00:08:11.311 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 71af68a9-f7b9-4278-a263-674984707d23 lvol 20 00:08:11.570 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@32 -- # lvol=cc151a15-479d-42a5-9950-d7adfeb3a09f 00:08:11.570 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:11.570 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 cc151a15-479d-42a5-9950-d7adfeb3a09f 00:08:11.829 11:53:58 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:12.088 [2024-07-25 11:53:59.112964] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:12.088 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:12.088 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@42 -- # perf_pid=192318 00:08:12.088 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -o 4096 -q 128 -s 512 -w randwrite -t 10 -c 0x18 00:08:12.088 11:53:59 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@44 -- # sleep 1 00:08:12.346 EAL: No free 2048 kB hugepages reported on node 1 00:08:13.283 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_snapshot cc151a15-479d-42a5-9950-d7adfeb3a09f MY_SNAPSHOT 00:08:13.283 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@47 -- # snapshot=b23b7831-0c9d-4e0d-b4c8-cb2ec0d47251 00:08:13.283 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_resize cc151a15-479d-42a5-9950-d7adfeb3a09f 30 00:08:13.542 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_clone b23b7831-0c9d-4e0d-b4c8-cb2ec0d47251 MY_CLONE 00:08:13.801 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@49 -- # clone=70adf3be-50fe-47b1-b39c-57c302597e91 00:08:13.801 11:54:00 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_inflate 70adf3be-50fe-47b1-b39c-57c302597e91 00:08:14.369 11:54:01 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@53 -- # wait 192318 00:08:22.489 Initializing NVMe Controllers 00:08:22.489 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode0 00:08:22.489 Controller IO queue size 128, less than required. 00:08:22.489 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 
00:08:22.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 3 00:08:22.489 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 with lcore 4 00:08:22.489 Initialization complete. Launching workers. 00:08:22.489 ======================================================== 00:08:22.489 Latency(us) 00:08:22.489 Device Information : IOPS MiB/s Average min max 00:08:22.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 3: 12285.20 47.99 10424.98 1995.63 71835.92 00:08:22.489 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode0) NSID 1 from core 4: 11839.10 46.25 10815.07 3799.72 60242.43 00:08:22.489 ======================================================== 00:08:22.490 Total : 24124.29 94.24 10616.42 1995.63 71835.92 00:08:22.490 00:08:22.490 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:22.748 11:54:09 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete cc151a15-479d-42a5-9950-d7adfeb3a09f 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 71af68a9-f7b9-4278-a263-674984707d23 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@60 -- # rm -f 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@62 -- # trap - SIGINT SIGTERM EXIT 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- target/nvmf_lvol.sh@64 -- # nvmftestfini 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@488 -- # nvmfcleanup 00:08:23.006 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@117 -- # sync 00:08:23.007 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:08:23.007 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@120 -- # set +e 00:08:23.007 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@121 -- # for i in {1..20} 00:08:23.007 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:08:23.007 rmmod nvme_tcp 00:08:23.007 rmmod nvme_fabrics 00:08:23.266 rmmod nvme_keyring 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@124 -- # set -e 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@125 -- # return 0 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@489 -- # '[' -n 191828 ']' 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@490 -- # killprocess 191828 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@948 -- # '[' -z 191828 ']' 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@952 -- # kill -0 191828 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # uname 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 191828 00:08:23.266 11:54:10 
nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@966 -- # echo 'killing process with pid 191828' 00:08:23.266 killing process with pid 191828 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@967 -- # kill 191828 00:08:23.266 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@972 -- # wait 191828 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@278 -- # remove_spdk_ns 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:23.526 11:54:10 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:08:25.436 00:08:25.436 real 0m21.657s 00:08:25.436 user 1m3.774s 00:08:25.436 sys 0m6.949s 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvol -- common/autotest_common.sh@10 -- # set +x 00:08:25.436 ************************************ 00:08:25.436 END TEST nvmf_lvol 00:08:25.436 ************************************ 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@28 -- # run_test nvmf_lvs_grow /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:25.436 11:54:12 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:08:25.697 ************************************ 00:08:25.697 START TEST nvmf_lvs_grow 00:08:25.697 ************************************ 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh --transport=tcp 00:08:25.697 * Looking for test storage... 
00:08:25.697 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # uname -s 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.697 11:54:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@5 -- # export PATH 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@47 -- # : 0 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@51 -- # have_pci_nics=0 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@11 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@12 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@98 -- # nvmftestinit 00:08:25.697 11:54:12 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@448 -- # prepare_net_devs 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@410 -- # local -g is_hw=no 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@412 -- # remove_spdk_ns 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@285 -- # xtrace_disable 00:08:25.697 11:54:12 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # pci_devs=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@291 -- # local -a pci_devs 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # pci_net_devs=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # pci_drivers=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@293 -- # local -A pci_drivers 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # net_devs=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@295 -- # local -ga net_devs 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # e810=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@296 -- # local -ga e810 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # x722=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@297 -- # local -ga x722 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # mlx=() 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@298 -- # local -ga mlx 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@308 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:08:30.983 Found 0000:86:00.0 (0x8086 - 0x159b) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:08:30.983 Found 0000:86:00.1 (0x8086 - 0x159b) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:08:30.983 
11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:08:30.983 Found net devices under 0000:86:00.0: cvl_0_0 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@390 -- # [[ up == up ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:08:30.983 Found net devices under 0000:86:00.1: cvl_0_1 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@414 -- # is_hw=yes 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:08:30.983 11:54:17 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:08:30.983 11:54:17 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:08:30.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:08:30.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.156 ms 00:08:30.984 00:08:30.984 --- 10.0.0.2 ping statistics --- 00:08:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.984 rtt min/avg/max/mdev = 0.156/0.156/0.156/0.000 ms 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:08:30.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:08:30.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:08:30.984 00:08:30.984 --- 10.0.0.1 ping statistics --- 00:08:30.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:08:30.984 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@422 -- # return 0 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@99 -- # nvmfappstart -m 0x1 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@722 -- # xtrace_disable 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@481 -- # nvmfpid=198197 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@482 -- # waitforlisten 198197 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@829 -- # '[' -z 198197 ']' 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:08:30.984 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:30.984 11:54:18 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:31.245 [2024-07-25 11:54:18.236630] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
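The nvmf_tcp_init sequence traced above is plain iproute2/iptables plumbing: the second e810 port (cvl_0_1) stays in the default namespace as the initiator at 10.0.0.1, the first port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace as the target at 10.0.0.2, TCP port 4420 is opened, and both directions are ping-tested. A condensed sketch of the same steps, using the interface names from this run and omitting the surrounding script plumbing:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (default namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # allow NVMe/TCP (port 4420) traffic
  ping -c 1 10.0.0.2                                                  # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> initiator

The nvmf_tgt application is then launched under the "ip netns exec cvl_0_0_ns_spdk" prefix stored in NVMF_TARGET_NS_CMD, which is why the target process below runs inside the namespace.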
00:08:31.245 [2024-07-25 11:54:18.236674] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:08:31.245 EAL: No free 2048 kB hugepages reported on node 1 00:08:31.245 [2024-07-25 11:54:18.293927] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:31.245 [2024-07-25 11:54:18.373983] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:08:31.245 [2024-07-25 11:54:18.374021] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:08:31.245 [2024-07-25 11:54:18.374028] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:08:31.245 [2024-07-25 11:54:18.374034] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:08:31.245 [2024-07-25 11:54:18.374039] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:08:31.245 [2024-07-25 11:54:18.374061] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:08:31.817 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:31.817 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@862 -- # return 0 00:08:31.817 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:08:31.817 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@728 -- # xtrace_disable 00:08:31.817 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:08:32.079 [2024-07-25 11:54:19.221685] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@102 -- # run_test lvs_grow_clean lvs_grow 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:32.079 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:32.079 ************************************ 00:08:32.079 START TEST lvs_grow_clean 00:08:32.080 ************************************ 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1123 -- # lvs_grow 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@18 -- # local 
aio_init_size_mb=200 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.080 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:32.412 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:32.412 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@28 -- # lvs=91af8209-4059-42df-8b52-1fab736fd81a 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:32.695 11:54:19 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u 91af8209-4059-42df-8b52-1fab736fd81a lvol 150 00:08:32.955 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@33 -- # lvol=4eda69ca-aeb7-41ba-b316-0b9e311672db 00:08:32.955 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:32.955 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:32.955 [2024-07-25 11:54:20.176317] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:32.955 [2024-07-25 11:54:20.176372] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:32.955 true 00:08:32.955 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:32.955 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:33.215 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:33.215 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:33.475 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 4eda69ca-aeb7-41ba-b316-0b9e311672db 00:08:33.475 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:33.734 [2024-07-25 11:54:20.854365] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:33.734 11:54:20 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=198708 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 198708 /var/tmp/bdevperf.sock 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@829 -- # '[' -z 198708 ']' 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:33.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:33.994 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:33.994 [2024-07-25 11:54:21.079485] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
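The I/O half of the clean test, as the @47–@55 steps of nvmf_lvs_grow.sh show in this trace, mirrors the target side in a second SPDK app: bdevperf is started on its own RPC socket, an NVMe-oF controller is attached to it over TCP, and the 10-second randwrite run is kicked off through bdevperf.py. A condensed sketch (paths shortened to the spdk checkout; reading -z as "defer the workload until perform_tests is sent" is an inference from this flow):

  # start bdevperf on its own RPC socket; -z defers the workload until perform_tests
  build/examples/bdevperf -r /var/tmp/bdevperf.sock \
      -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z &

  # attach the exported namespace as bdev "Nvme0" over NVMe/TCP
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller \
      -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0

  # run the configured 10-second randwrite job against Nvme0n1
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests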
00:08:33.994 [2024-07-25 11:54:21.079534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid198708 ] 00:08:33.994 EAL: No free 2048 kB hugepages reported on node 1 00:08:33.994 [2024-07-25 11:54:21.133819] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:33.994 [2024-07-25 11:54:21.212396] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:34.933 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:34.933 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@862 -- # return 0 00:08:34.933 11:54:21 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:35.192 Nvme0n1 00:08:35.192 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:35.453 [ 00:08:35.453 { 00:08:35.453 "name": "Nvme0n1", 00:08:35.453 "aliases": [ 00:08:35.453 "4eda69ca-aeb7-41ba-b316-0b9e311672db" 00:08:35.453 ], 00:08:35.453 "product_name": "NVMe disk", 00:08:35.453 "block_size": 4096, 00:08:35.453 "num_blocks": 38912, 00:08:35.453 "uuid": "4eda69ca-aeb7-41ba-b316-0b9e311672db", 00:08:35.453 "assigned_rate_limits": { 00:08:35.453 "rw_ios_per_sec": 0, 00:08:35.453 "rw_mbytes_per_sec": 0, 00:08:35.453 "r_mbytes_per_sec": 0, 00:08:35.453 "w_mbytes_per_sec": 0 00:08:35.453 }, 00:08:35.453 "claimed": false, 00:08:35.453 "zoned": false, 00:08:35.453 "supported_io_types": { 00:08:35.453 "read": true, 00:08:35.453 "write": true, 00:08:35.453 "unmap": true, 00:08:35.453 "flush": true, 00:08:35.453 "reset": true, 00:08:35.453 "nvme_admin": true, 00:08:35.453 "nvme_io": true, 00:08:35.453 "nvme_io_md": false, 00:08:35.453 "write_zeroes": true, 00:08:35.453 "zcopy": false, 00:08:35.453 "get_zone_info": false, 00:08:35.453 "zone_management": false, 00:08:35.453 "zone_append": false, 00:08:35.453 "compare": true, 00:08:35.453 "compare_and_write": true, 00:08:35.453 "abort": true, 00:08:35.453 "seek_hole": false, 00:08:35.453 "seek_data": false, 00:08:35.453 "copy": true, 00:08:35.453 "nvme_iov_md": false 00:08:35.453 }, 00:08:35.453 "memory_domains": [ 00:08:35.453 { 00:08:35.453 "dma_device_id": "system", 00:08:35.453 "dma_device_type": 1 00:08:35.453 } 00:08:35.453 ], 00:08:35.453 "driver_specific": { 00:08:35.453 "nvme": [ 00:08:35.453 { 00:08:35.453 "trid": { 00:08:35.453 "trtype": "TCP", 00:08:35.453 "adrfam": "IPv4", 00:08:35.453 "traddr": "10.0.0.2", 00:08:35.453 "trsvcid": "4420", 00:08:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:35.453 }, 00:08:35.453 "ctrlr_data": { 00:08:35.453 "cntlid": 1, 00:08:35.453 "vendor_id": "0x8086", 00:08:35.453 "model_number": "SPDK bdev Controller", 00:08:35.453 "serial_number": "SPDK0", 00:08:35.453 "firmware_revision": "24.09", 00:08:35.453 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:35.453 "oacs": { 00:08:35.453 "security": 0, 00:08:35.453 "format": 0, 00:08:35.453 "firmware": 0, 00:08:35.453 "ns_manage": 0 00:08:35.453 }, 00:08:35.453 
"multi_ctrlr": true, 00:08:35.453 "ana_reporting": false 00:08:35.453 }, 00:08:35.453 "vs": { 00:08:35.453 "nvme_version": "1.3" 00:08:35.453 }, 00:08:35.453 "ns_data": { 00:08:35.453 "id": 1, 00:08:35.453 "can_share": true 00:08:35.453 } 00:08:35.453 } 00:08:35.453 ], 00:08:35.453 "mp_policy": "active_passive" 00:08:35.453 } 00:08:35.453 } 00:08:35.453 ] 00:08:35.453 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=198941 00:08:35.453 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:35.453 11:54:22 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:35.453 Running I/O for 10 seconds... 00:08:36.391 Latency(us) 00:08:36.391 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:36.391 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:36.391 Nvme0n1 : 1.00 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:08:36.391 =================================================================================================================== 00:08:36.391 Total : 22121.00 86.41 0.00 0.00 0.00 0.00 0.00 00:08:36.391 00:08:37.329 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:37.329 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:37.329 Nvme0n1 : 2.00 22306.50 87.13 0.00 0.00 0.00 0.00 0.00 00:08:37.329 =================================================================================================================== 00:08:37.329 Total : 22306.50 87.13 0.00 0.00 0.00 0.00 0.00 00:08:37.329 00:08:37.589 true 00:08:37.589 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:37.589 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:37.849 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:37.849 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:37.849 11:54:24 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@65 -- # wait 198941 00:08:38.418 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:38.418 Nvme0n1 : 3.00 22232.67 86.85 0.00 0.00 0.00 0.00 0.00 00:08:38.418 =================================================================================================================== 00:08:38.418 Total : 22232.67 86.85 0.00 0.00 0.00 0.00 0.00 00:08:38.418 00:08:39.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:39.355 Nvme0n1 : 4.00 22289.00 87.07 0.00 0.00 0.00 0.00 0.00 00:08:39.355 =================================================================================================================== 00:08:39.355 Total : 22289.00 87.07 0.00 0.00 0.00 0.00 0.00 00:08:39.355 00:08:40.743 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:08:40.743 Nvme0n1 : 5.00 22299.00 87.11 0.00 0.00 0.00 0.00 0.00 00:08:40.743 =================================================================================================================== 00:08:40.743 Total : 22299.00 87.11 0.00 0.00 0.00 0.00 0.00 00:08:40.743 00:08:41.685 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:41.685 Nvme0n1 : 6.00 22300.50 87.11 0.00 0.00 0.00 0.00 0.00 00:08:41.685 =================================================================================================================== 00:08:41.685 Total : 22300.50 87.11 0.00 0.00 0.00 0.00 0.00 00:08:41.685 00:08:42.625 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:42.625 Nvme0n1 : 7.00 22383.57 87.44 0.00 0.00 0.00 0.00 0.00 00:08:42.625 =================================================================================================================== 00:08:42.625 Total : 22383.57 87.44 0.00 0.00 0.00 0.00 0.00 00:08:42.625 00:08:43.565 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:43.565 Nvme0n1 : 8.00 22358.50 87.34 0.00 0.00 0.00 0.00 0.00 00:08:43.565 =================================================================================================================== 00:08:43.565 Total : 22358.50 87.34 0.00 0.00 0.00 0.00 0.00 00:08:43.565 00:08:44.503 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:44.503 Nvme0n1 : 9.00 22371.89 87.39 0.00 0.00 0.00 0.00 0.00 00:08:44.503 =================================================================================================================== 00:08:44.503 Total : 22371.89 87.39 0.00 0.00 0.00 0.00 0.00 00:08:44.503 00:08:45.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.443 Nvme0n1 : 10.00 22440.80 87.66 0.00 0.00 0.00 0.00 0.00 00:08:45.443 =================================================================================================================== 00:08:45.443 Total : 22440.80 87.66 0.00 0.00 0.00 0.00 0.00 00:08:45.443 00:08:45.443 00:08:45.443 Latency(us) 00:08:45.443 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.443 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:45.443 Nvme0n1 : 10.00 22446.14 87.68 0.00 0.00 5699.45 1538.67 11796.48 00:08:45.443 =================================================================================================================== 00:08:45.443 Total : 22446.14 87.68 0.00 0.00 5699.45 1538.67 11796.48 00:08:45.443 0 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@66 -- # killprocess 198708 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@948 -- # '[' -z 198708 ']' 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@952 -- # kill -0 198708 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # uname 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 198708 00:08:45.443 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:08:45.443 11:54:32 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:08:45.444 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 198708' 00:08:45.444 killing process with pid 198708 00:08:45.444 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@967 -- # kill 198708 00:08:45.444 Received shutdown signal, test time was about 10.000000 seconds 00:08:45.444 00:08:45.444 Latency(us) 00:08:45.444 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:45.444 =================================================================================================================== 00:08:45.444 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:08:45.444 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@972 -- # wait 198708 00:08:45.704 11:54:32 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:45.965 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:08:45.965 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:45.965 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:08:46.230 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:08:46.230 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@72 -- # [[ '' == \d\i\r\t\y ]] 00:08:46.230 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:46.491 [2024-07-25 11:54:33.547626] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@648 -- # local es=0 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # type -t 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:08:46.491 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:46.794 request: 00:08:46.794 { 00:08:46.794 "uuid": "91af8209-4059-42df-8b52-1fab736fd81a", 00:08:46.794 "method": "bdev_lvol_get_lvstores", 00:08:46.794 "req_id": 1 00:08:46.794 } 00:08:46.794 Got JSON-RPC error response 00:08:46.794 response: 00:08:46.794 { 00:08:46.794 "code": -19, 00:08:46.794 "message": "No such device" 00:08:46.794 } 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@651 -- # es=1 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:46.794 aio_bdev 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev 4eda69ca-aeb7-41ba-b316-0b9e311672db 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@897 -- # local bdev_name=4eda69ca-aeb7-41ba-b316-0b9e311672db 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@899 -- # local i 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:08:46.794 11:54:33 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:08:47.055 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_get_bdevs -b 4eda69ca-aeb7-41ba-b316-0b9e311672db -t 2000 00:08:47.055 [ 00:08:47.055 { 00:08:47.055 "name": "4eda69ca-aeb7-41ba-b316-0b9e311672db", 00:08:47.055 "aliases": [ 00:08:47.055 "lvs/lvol" 00:08:47.055 ], 00:08:47.055 "product_name": "Logical Volume", 00:08:47.055 "block_size": 4096, 00:08:47.055 "num_blocks": 38912, 00:08:47.055 "uuid": "4eda69ca-aeb7-41ba-b316-0b9e311672db", 00:08:47.055 "assigned_rate_limits": { 00:08:47.055 "rw_ios_per_sec": 0, 00:08:47.055 "rw_mbytes_per_sec": 0, 00:08:47.055 "r_mbytes_per_sec": 0, 00:08:47.055 "w_mbytes_per_sec": 0 00:08:47.055 }, 00:08:47.055 "claimed": false, 00:08:47.055 "zoned": false, 00:08:47.055 "supported_io_types": { 00:08:47.055 "read": true, 00:08:47.055 "write": true, 00:08:47.055 "unmap": true, 00:08:47.055 "flush": false, 00:08:47.055 "reset": true, 00:08:47.055 "nvme_admin": false, 00:08:47.055 "nvme_io": false, 00:08:47.055 "nvme_io_md": false, 00:08:47.055 "write_zeroes": true, 00:08:47.055 "zcopy": false, 00:08:47.055 "get_zone_info": false, 00:08:47.055 "zone_management": false, 00:08:47.055 "zone_append": false, 00:08:47.055 "compare": false, 00:08:47.055 "compare_and_write": false, 00:08:47.055 "abort": false, 00:08:47.055 "seek_hole": true, 00:08:47.055 "seek_data": true, 00:08:47.055 "copy": false, 00:08:47.055 "nvme_iov_md": false 00:08:47.055 }, 00:08:47.055 "driver_specific": { 00:08:47.055 "lvol": { 00:08:47.055 "lvol_store_uuid": "91af8209-4059-42df-8b52-1fab736fd81a", 00:08:47.055 "base_bdev": "aio_bdev", 00:08:47.055 "thin_provision": false, 00:08:47.055 "num_allocated_clusters": 38, 00:08:47.055 "snapshot": false, 00:08:47.055 "clone": false, 00:08:47.055 "esnap_clone": false 00:08:47.055 } 00:08:47.055 } 00:08:47.055 } 00:08:47.055 ] 00:08:47.055 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@905 -- # return 0 00:08:47.055 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:47.055 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:08:47.316 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:08:47.316 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:47.316 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:08:47.576 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:08:47.576 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete 4eda69ca-aeb7-41ba-b316-0b9e311672db 00:08:47.576 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 91af8209-4059-42df-8b52-1fab736fd81a 00:08:47.837 11:54:34 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@94 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.097 00:08:48.097 real 0m15.908s 00:08:48.097 user 0m15.459s 00:08:48.097 sys 0m1.533s 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_clean -- common/autotest_common.sh@10 -- # set +x 00:08:48.097 ************************************ 00:08:48.097 END TEST lvs_grow_clean 00:08:48.097 ************************************ 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@103 -- # run_test lvs_grow_dirty lvs_grow dirty 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1105 -- # xtrace_disable 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:08:48.097 ************************************ 00:08:48.097 START TEST lvs_grow_dirty 00:08:48.097 ************************************ 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1123 -- # lvs_grow dirty 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@15 -- # local aio_bdev lvs lvol 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@16 -- # local data_clusters free_clusters 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@17 -- # local bdevperf_pid run_test_pid 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@18 -- # local aio_init_size_mb=200 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@19 -- # local aio_final_size_mb=400 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@20 -- # local lvol_bdev_size_mb=150 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@23 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@24 -- # truncate -s 200M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.097 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:08:48.357 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@25 -- # aio_bdev=aio_bdev 00:08:48.357 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create_lvstore --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs 
00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@28 -- # lvs=fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # jq -r '.[0].total_data_clusters' 00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@29 -- # data_clusters=49 00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@30 -- # (( data_clusters == 49 )) 00:08:48.618 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_create -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c lvol 150 00:08:48.878 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@33 -- # lvol=c688ffec-0163-42f7-a6f4-c72e26815fe5 00:08:48.878 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@36 -- # truncate -s 400M /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:08:48.878 11:54:35 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_rescan aio_bdev 00:08:49.138 [2024-07-25 11:54:36.137791] bdev_aio.c:1030:bdev_aio_rescan: *NOTICE*: AIO device is resized: bdev name /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev, old block count 51200, new block count 102400 00:08:49.138 [2024-07-25 11:54:36.137846] vbdev_lvol.c: 165:vbdev_lvs_base_bdev_event_cb: *NOTICE*: Unsupported bdev event: type 1 00:08:49.138 true 00:08:49.138 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # jq -r '.[0].total_data_clusters' 00:08:49.138 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:08:49.138 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@38 -- # (( data_clusters == 49 )) 00:08:49.138 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a -s SPDK0 00:08:49.397 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 c688ffec-0163-42f7-a6f4-c72e26815fe5 00:08:49.657 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:08:49.657 [2024-07-25 11:54:36.811780] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:08:49.657 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@44 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@48 -- # bdevperf_pid=201310 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@49 -- # trap 'killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@50 -- # waitforlisten 201310 /var/tmp/bdevperf.sock 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 201310 ']' 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock -m 0x2 -o 4096 -q 128 -w randwrite -t 10 -S 1 -z 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:08:49.919 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:08:49.919 11:54:36 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:08:49.919 [2024-07-25 11:54:37.037124] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
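Stripped of the xtrace prefixes, the lvstore setup and grow exercised by these tests comes down to a handful of RPCs: back an AIO bdev with a 200 MiB file, create an lvstore with 4 MiB clusters on it (49 data clusters), carve out a 150 MiB lvol (38 four-MiB clusters), then grow the file to 400 MiB, rescan the AIO bdev and grow the lvstore to 99 clusters. A condensed sketch, with $lvs standing in for the UUID returned by the create call and the grow issued in-line rather than while bdevperf is writing, as the test actually does:

  AIO_FILE=test/nvmf/target/aio_bdev                    # backing file used by the test (repo-relative)
  truncate -s 200M "$AIO_FILE"
  scripts/rpc.py bdev_aio_create "$AIO_FILE" aio_bdev 4096
  lvs=$(scripts/rpc.py bdev_lvol_create_lvstore \
        --cluster-sz 4194304 --md-pages-per-cluster-ratio 300 aio_bdev lvs)
  scripts/rpc.py bdev_lvol_create -u "$lvs" lvol 150    # 150 MiB lvol on the new lvstore

  # grow: enlarge the file, let the AIO bdev see the new size, then grow the lvstore
  truncate -s 400M "$AIO_FILE"
  scripts/rpc.py bdev_aio_rescan aio_bdev
  scripts/rpc.py bdev_lvol_grow_lvstore -u "$lvs"
  scripts/rpc.py bdev_lvol_get_lvstores -u "$lvs" | jq -r '.[0].total_data_clusters'   # 49 before, 99 after

The dirty variant differs only in that the first nvmf_tgt is killed with -9 before the lvstore is cleanly unloaded, so the re-created aio_bdev goes through blobstore recovery on load, as the bs_recover notices further down show.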
00:08:49.919 [2024-07-25 11:54:37.037171] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid201310 ] 00:08:49.919 EAL: No free 2048 kB hugepages reported on node 1 00:08:49.919 [2024-07-25 11:54:37.089310] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:08:49.919 [2024-07-25 11:54:37.162666] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:08:50.859 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:08:50.859 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:08:50.859 11:54:37 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 00:08:50.859 Nvme0n1 00:08:51.120 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_get_bdevs -b Nvme0n1 -t 3000 00:08:51.120 [ 00:08:51.120 { 00:08:51.120 "name": "Nvme0n1", 00:08:51.120 "aliases": [ 00:08:51.120 "c688ffec-0163-42f7-a6f4-c72e26815fe5" 00:08:51.120 ], 00:08:51.120 "product_name": "NVMe disk", 00:08:51.120 "block_size": 4096, 00:08:51.120 "num_blocks": 38912, 00:08:51.120 "uuid": "c688ffec-0163-42f7-a6f4-c72e26815fe5", 00:08:51.120 "assigned_rate_limits": { 00:08:51.120 "rw_ios_per_sec": 0, 00:08:51.120 "rw_mbytes_per_sec": 0, 00:08:51.120 "r_mbytes_per_sec": 0, 00:08:51.120 "w_mbytes_per_sec": 0 00:08:51.120 }, 00:08:51.120 "claimed": false, 00:08:51.120 "zoned": false, 00:08:51.120 "supported_io_types": { 00:08:51.120 "read": true, 00:08:51.120 "write": true, 00:08:51.120 "unmap": true, 00:08:51.120 "flush": true, 00:08:51.120 "reset": true, 00:08:51.120 "nvme_admin": true, 00:08:51.120 "nvme_io": true, 00:08:51.120 "nvme_io_md": false, 00:08:51.120 "write_zeroes": true, 00:08:51.120 "zcopy": false, 00:08:51.120 "get_zone_info": false, 00:08:51.120 "zone_management": false, 00:08:51.120 "zone_append": false, 00:08:51.120 "compare": true, 00:08:51.120 "compare_and_write": true, 00:08:51.120 "abort": true, 00:08:51.120 "seek_hole": false, 00:08:51.120 "seek_data": false, 00:08:51.120 "copy": true, 00:08:51.120 "nvme_iov_md": false 00:08:51.120 }, 00:08:51.120 "memory_domains": [ 00:08:51.120 { 00:08:51.120 "dma_device_id": "system", 00:08:51.120 "dma_device_type": 1 00:08:51.120 } 00:08:51.120 ], 00:08:51.120 "driver_specific": { 00:08:51.120 "nvme": [ 00:08:51.120 { 00:08:51.120 "trid": { 00:08:51.120 "trtype": "TCP", 00:08:51.120 "adrfam": "IPv4", 00:08:51.120 "traddr": "10.0.0.2", 00:08:51.120 "trsvcid": "4420", 00:08:51.120 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:08:51.120 }, 00:08:51.120 "ctrlr_data": { 00:08:51.120 "cntlid": 1, 00:08:51.120 "vendor_id": "0x8086", 00:08:51.120 "model_number": "SPDK bdev Controller", 00:08:51.120 "serial_number": "SPDK0", 00:08:51.120 "firmware_revision": "24.09", 00:08:51.120 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:08:51.120 "oacs": { 00:08:51.120 "security": 0, 00:08:51.120 "format": 0, 00:08:51.120 "firmware": 0, 00:08:51.120 "ns_manage": 0 00:08:51.120 }, 00:08:51.120 
"multi_ctrlr": true, 00:08:51.120 "ana_reporting": false 00:08:51.120 }, 00:08:51.120 "vs": { 00:08:51.120 "nvme_version": "1.3" 00:08:51.120 }, 00:08:51.120 "ns_data": { 00:08:51.120 "id": 1, 00:08:51.120 "can_share": true 00:08:51.120 } 00:08:51.120 } 00:08:51.120 ], 00:08:51.120 "mp_policy": "active_passive" 00:08:51.120 } 00:08:51.120 } 00:08:51.120 ] 00:08:51.120 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@56 -- # run_test_pid=201544 00:08:51.120 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@57 -- # sleep 2 00:08:51.120 11:54:38 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:08:51.120 Running I/O for 10 seconds... 00:08:52.501 Latency(us) 00:08:52.501 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:08:52.501 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:52.501 Nvme0n1 : 1.00 21803.00 85.17 0.00 0.00 0.00 0.00 0.00 00:08:52.501 =================================================================================================================== 00:08:52.501 Total : 21803.00 85.17 0.00 0.00 0.00 0.00 0.00 00:08:52.501 00:08:53.071 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_grow_lvstore -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:08:53.331 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:53.331 Nvme0n1 : 2.00 22189.50 86.68 0.00 0.00 0.00 0.00 0.00 00:08:53.331 =================================================================================================================== 00:08:53.331 Total : 22189.50 86.68 0.00 0.00 0.00 0.00 0.00 00:08:53.331 00:08:53.331 true 00:08:53.331 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:08:53.331 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # jq -r '.[0].total_data_clusters' 00:08:53.591 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@61 -- # data_clusters=99 00:08:53.592 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@62 -- # (( data_clusters == 99 )) 00:08:53.592 11:54:40 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@65 -- # wait 201544 00:08:54.162 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:54.162 Nvme0n1 : 3.00 22370.00 87.38 0.00 0.00 0.00 0.00 0.00 00:08:54.162 =================================================================================================================== 00:08:54.162 Total : 22370.00 87.38 0.00 0.00 0.00 0.00 0.00 00:08:54.162 00:08:55.543 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:55.543 Nvme0n1 : 4.00 22415.00 87.56 0.00 0.00 0.00 0.00 0.00 00:08:55.543 =================================================================================================================== 00:08:55.543 Total : 22415.00 87.56 0.00 0.00 0.00 0.00 0.00 00:08:55.543 00:08:56.479 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO 
size: 4096) 00:08:56.479 Nvme0n1 : 5.00 22468.40 87.77 0.00 0.00 0.00 0.00 0.00 00:08:56.479 =================================================================================================================== 00:08:56.479 Total : 22468.40 87.77 0.00 0.00 0.00 0.00 0.00 00:08:56.479 00:08:57.416 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:57.416 Nvme0n1 : 6.00 22492.00 87.86 0.00 0.00 0.00 0.00 0.00 00:08:57.416 =================================================================================================================== 00:08:57.416 Total : 22492.00 87.86 0.00 0.00 0.00 0.00 0.00 00:08:57.416 00:08:58.355 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:58.355 Nvme0n1 : 7.00 22498.14 87.88 0.00 0.00 0.00 0.00 0.00 00:08:58.355 =================================================================================================================== 00:08:58.355 Total : 22498.14 87.88 0.00 0.00 0.00 0.00 0.00 00:08:58.355 00:08:59.295 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:08:59.295 Nvme0n1 : 8.00 22507.38 87.92 0.00 0.00 0.00 0.00 0.00 00:08:59.295 =================================================================================================================== 00:08:59.295 Total : 22507.38 87.92 0.00 0.00 0.00 0.00 0.00 00:08:59.295 00:09:00.233 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:00.233 Nvme0n1 : 9.00 22550.00 88.09 0.00 0.00 0.00 0.00 0.00 00:09:00.233 =================================================================================================================== 00:09:00.233 Total : 22550.00 88.09 0.00 0.00 0.00 0.00 0.00 00:09:00.233 00:09:01.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.173 Nvme0n1 : 10.00 22561.70 88.13 0.00 0.00 0.00 0.00 0.00 00:09:01.173 =================================================================================================================== 00:09:01.173 Total : 22561.70 88.13 0.00 0.00 0.00 0.00 0.00 00:09:01.173 00:09:01.173 00:09:01.173 Latency(us) 00:09:01.173 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.173 Job: Nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:09:01.173 Nvme0n1 : 10.01 22560.80 88.13 0.00 0.00 5669.35 2464.72 24846.69 00:09:01.173 =================================================================================================================== 00:09:01.173 Total : 22560.80 88.13 0.00 0.00 5669.35 2464.72 24846.69 00:09:01.173 0 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@66 -- # killprocess 201310 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@948 -- # '[' -z 201310 ']' 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@952 -- # kill -0 201310 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # uname 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:01.173 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 201310 00:09:01.433 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:01.433 11:54:48 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:01.433 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@966 -- # echo 'killing process with pid 201310' 00:09:01.433 killing process with pid 201310 00:09:01.433 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@967 -- # kill 201310 00:09:01.433 Received shutdown signal, test time was about 10.000000 seconds 00:09:01.433 00:09:01.433 Latency(us) 00:09:01.433 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:01.433 =================================================================================================================== 00:09:01.433 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:01.433 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@972 -- # wait 201310 00:09:01.433 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:01.693 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@69 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:09:01.953 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:01.953 11:54:48 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # jq -r '.[0].free_clusters' 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@70 -- # free_clusters=61 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@72 -- # [[ dirty == \d\i\r\t\y ]] 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@74 -- # kill -9 198197 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # wait 198197 00:09:01.953 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_lvs_grow.sh: line 75: 198197 Killed "${NVMF_APP[@]}" "$@" 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@75 -- # true 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@76 -- # nvmfappstart -m 0x1 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@481 -- # nvmfpid=203391 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@482 -- # waitforlisten 203391 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@829 -- # '[' -z 203391 ']' 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- 
common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:01.953 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:01.953 11:54:49 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:09:02.213 [2024-07-25 11:54:49.247226] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:09:02.213 [2024-07-25 11:54:49.247273] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:02.213 EAL: No free 2048 kB hugepages reported on node 1 00:09:02.213 [2024-07-25 11:54:49.304631] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:02.213 [2024-07-25 11:54:49.383686] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:02.213 [2024-07-25 11:54:49.383719] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:02.213 [2024-07-25 11:54:49.383726] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:02.213 [2024-07-25 11:54:49.383731] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:02.213 [2024-07-25 11:54:49.383737] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
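A minimal sketch (not captured console output) of the recovery probe this part of the trace drives: the previous target (pid 198197) was killed with SIGKILL while the lvstore was dirty, a fresh nvmf_tgt (pid 203391) has just started, and the steps that follow re-create the aio bdev so the blobstore replays its metadata and the free-cluster count can be re-read. The rpc.py subcommands, paths, and jq filter below are the ones visible in the trace; treat the script as an illustration of the sequence, not a standalone test.

    #!/usr/bin/env bash
    # Sketch of the dirty-lvstore recovery probe exercised by lvs_grow_dirty (values from the trace).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    AIO_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev  # backing file used by the test
    LVS_UUID="fca6cb9b-2016-4c75-a12a-8acb1f7c222c"                                       # lvstore UUID from the trace

    # Re-attaching the aio bdev makes the blobstore perform recovery on the dirty lvstore.
    "$RPC" bdev_aio_create "$AIO_FILE" aio_bdev 4096

    # After recovery the lvstore should report the same free-cluster count (61) as before the kill.
    free_clusters=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
    echo "free_clusters=$free_clusters"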
00:09:02.213 [2024-07-25 11:54:49.383757] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@862 -- # return 0 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:03.152 [2024-07-25 11:54:50.225366] blobstore.c:4865:bs_recover: *NOTICE*: Performing recovery on blobstore 00:09:03.152 [2024-07-25 11:54:50.225445] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:09:03.152 [2024-07-25 11:54:50.225469] blobstore.c:4812:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@77 -- # aio_bdev=aio_bdev 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@78 -- # waitforbdev c688ffec-0163-42f7-a6f4-c72e26815fe5 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c688ffec-0163-42f7-a6f4-c72e26815fe5 00:09:03.152 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:03.153 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:03.153 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:03.153 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:03.153 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:03.412 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c688ffec-0163-42f7-a6f4-c72e26815fe5 -t 2000 00:09:03.412 [ 00:09:03.412 { 00:09:03.412 "name": "c688ffec-0163-42f7-a6f4-c72e26815fe5", 00:09:03.412 "aliases": [ 00:09:03.412 "lvs/lvol" 00:09:03.412 ], 00:09:03.412 "product_name": "Logical Volume", 00:09:03.412 "block_size": 4096, 00:09:03.412 "num_blocks": 38912, 00:09:03.412 "uuid": "c688ffec-0163-42f7-a6f4-c72e26815fe5", 00:09:03.412 "assigned_rate_limits": { 00:09:03.412 "rw_ios_per_sec": 0, 00:09:03.412 "rw_mbytes_per_sec": 0, 00:09:03.412 "r_mbytes_per_sec": 0, 00:09:03.412 "w_mbytes_per_sec": 0 00:09:03.412 }, 00:09:03.412 "claimed": false, 00:09:03.412 "zoned": false, 
00:09:03.412 "supported_io_types": { 00:09:03.412 "read": true, 00:09:03.412 "write": true, 00:09:03.412 "unmap": true, 00:09:03.412 "flush": false, 00:09:03.412 "reset": true, 00:09:03.412 "nvme_admin": false, 00:09:03.412 "nvme_io": false, 00:09:03.412 "nvme_io_md": false, 00:09:03.412 "write_zeroes": true, 00:09:03.412 "zcopy": false, 00:09:03.412 "get_zone_info": false, 00:09:03.412 "zone_management": false, 00:09:03.412 "zone_append": false, 00:09:03.412 "compare": false, 00:09:03.412 "compare_and_write": false, 00:09:03.412 "abort": false, 00:09:03.413 "seek_hole": true, 00:09:03.413 "seek_data": true, 00:09:03.413 "copy": false, 00:09:03.413 "nvme_iov_md": false 00:09:03.413 }, 00:09:03.413 "driver_specific": { 00:09:03.413 "lvol": { 00:09:03.413 "lvol_store_uuid": "fca6cb9b-2016-4c75-a12a-8acb1f7c222c", 00:09:03.413 "base_bdev": "aio_bdev", 00:09:03.413 "thin_provision": false, 00:09:03.413 "num_allocated_clusters": 38, 00:09:03.413 "snapshot": false, 00:09:03.413 "clone": false, 00:09:03.413 "esnap_clone": false 00:09:03.413 } 00:09:03.413 } 00:09:03.413 } 00:09:03.413 ] 00:09:03.413 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:03.413 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:03.413 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # jq -r '.[0].free_clusters' 00:09:03.673 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@79 -- # (( free_clusters == 61 )) 00:09:03.673 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:03.673 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # jq -r '.[0].total_data_clusters' 00:09:03.933 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@80 -- # (( data_clusters == 99 )) 00:09:03.933 11:54:50 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:03.933 [2024-07-25 11:54:51.089822] vbdev_lvol.c: 150:vbdev_lvs_hotremove_cb: *NOTICE*: bdev aio_bdev being removed: closing lvstore lvs 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@85 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@648 -- # local es=0 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t 
"$arg")" in 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:09:03.933 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:04.193 request: 00:09:04.193 { 00:09:04.193 "uuid": "fca6cb9b-2016-4c75-a12a-8acb1f7c222c", 00:09:04.193 "method": "bdev_lvol_get_lvstores", 00:09:04.193 "req_id": 1 00:09:04.193 } 00:09:04.193 Got JSON-RPC error response 00:09:04.193 response: 00:09:04.193 { 00:09:04.193 "code": -19, 00:09:04.193 "message": "No such device" 00:09:04.193 } 00:09:04.193 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@651 -- # es=1 00:09:04.193 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:09:04.193 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:09:04.193 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:09:04.193 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_create /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev aio_bdev 4096 00:09:04.453 aio_bdev 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@87 -- # waitforbdev c688ffec-0163-42f7-a6f4-c72e26815fe5 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@897 -- # local bdev_name=c688ffec-0163-42f7-a6f4-c72e26815fe5 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@898 -- # local bdev_timeout= 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@899 -- # local i 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # [[ -z '' ]] 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@900 -- # bdev_timeout=2000 00:09:04.453 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@902 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_wait_for_examine 00:09:04.453 11:54:51 
nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@904 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_get_bdevs -b c688ffec-0163-42f7-a6f4-c72e26815fe5 -t 2000 00:09:04.713 [ 00:09:04.713 { 00:09:04.713 "name": "c688ffec-0163-42f7-a6f4-c72e26815fe5", 00:09:04.713 "aliases": [ 00:09:04.713 "lvs/lvol" 00:09:04.713 ], 00:09:04.713 "product_name": "Logical Volume", 00:09:04.713 "block_size": 4096, 00:09:04.713 "num_blocks": 38912, 00:09:04.713 "uuid": "c688ffec-0163-42f7-a6f4-c72e26815fe5", 00:09:04.713 "assigned_rate_limits": { 00:09:04.713 "rw_ios_per_sec": 0, 00:09:04.713 "rw_mbytes_per_sec": 0, 00:09:04.713 "r_mbytes_per_sec": 0, 00:09:04.713 "w_mbytes_per_sec": 0 00:09:04.713 }, 00:09:04.713 "claimed": false, 00:09:04.713 "zoned": false, 00:09:04.713 "supported_io_types": { 00:09:04.713 "read": true, 00:09:04.713 "write": true, 00:09:04.713 "unmap": true, 00:09:04.713 "flush": false, 00:09:04.713 "reset": true, 00:09:04.713 "nvme_admin": false, 00:09:04.713 "nvme_io": false, 00:09:04.713 "nvme_io_md": false, 00:09:04.713 "write_zeroes": true, 00:09:04.713 "zcopy": false, 00:09:04.713 "get_zone_info": false, 00:09:04.713 "zone_management": false, 00:09:04.713 "zone_append": false, 00:09:04.713 "compare": false, 00:09:04.713 "compare_and_write": false, 00:09:04.713 "abort": false, 00:09:04.713 "seek_hole": true, 00:09:04.713 "seek_data": true, 00:09:04.713 "copy": false, 00:09:04.713 "nvme_iov_md": false 00:09:04.713 }, 00:09:04.713 "driver_specific": { 00:09:04.713 "lvol": { 00:09:04.713 "lvol_store_uuid": "fca6cb9b-2016-4c75-a12a-8acb1f7c222c", 00:09:04.713 "base_bdev": "aio_bdev", 00:09:04.713 "thin_provision": false, 00:09:04.713 "num_allocated_clusters": 38, 00:09:04.713 "snapshot": false, 00:09:04.713 "clone": false, 00:09:04.713 "esnap_clone": false 00:09:04.713 } 00:09:04.713 } 00:09:04.713 } 00:09:04.713 ] 00:09:04.713 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@905 -- # return 0 00:09:04.713 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:04.713 11:54:51 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # jq -r '.[0].free_clusters' 00:09:04.974 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@88 -- # (( free_clusters == 61 )) 00:09:04.974 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_get_lvstores -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 00:09:04.974 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # jq -r '.[0].total_data_clusters' 00:09:04.974 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@89 -- # (( data_clusters == 99 )) 00:09:04.974 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@92 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete c688ffec-0163-42f7-a6f4-c72e26815fe5 00:09:05.234 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u fca6cb9b-2016-4c75-a12a-8acb1f7c222c 
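A hedged sketch (not captured output) summarizing the verify-and-teardown sequence the trace just walked through and finishes in the next lines (bdev_aio_delete, rm -f): confirm the recovered lvstore still reports 61 free clusters out of 99 total data clusters, then delete the lvol, the lvstore, and the aio bdev. The subcommands, UUIDs, and jq filters are taken from the trace itself.

    #!/usr/bin/env bash
    # Sketch of the lvs_grow_dirty verification and cleanup (values from the trace).
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    LVS_UUID="fca6cb9b-2016-4c75-a12a-8acb1f7c222c"   # lvstore UUID from the trace
    LVOL="c688ffec-0163-42f7-a6f4-c72e26815fe5"       # lvol bdev name from the trace

    # Cluster accounting on the recovered lvstore.
    free=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].free_clusters')
    total=$("$RPC" bdev_lvol_get_lvstores -u "$LVS_UUID" | jq -r '.[0].total_data_clusters')
    (( free == 61 && total == 99 )) || echo "unexpected cluster counts: $free/$total"

    # Teardown order: lvol first, then its lvstore, then the aio bdev backing it.
    "$RPC" bdev_lvol_delete "$LVOL"
    "$RPC" bdev_lvol_delete_lvstore -u "$LVS_UUID"
    "$RPC" bdev_aio_delete aio_bdev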
00:09:05.495 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@94 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_aio_delete aio_bdev 00:09:05.495 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- target/nvmf_lvs_grow.sh@95 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/aio_bdev 00:09:05.756 00:09:05.756 real 0m17.498s 00:09:05.756 user 0m44.826s 00:09:05.756 sys 0m3.925s 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow.lvs_grow_dirty -- common/autotest_common.sh@10 -- # set +x 00:09:05.756 ************************************ 00:09:05.756 END TEST lvs_grow_dirty 00:09:05.756 ************************************ 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1142 -- # return 0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # process_shm --id 0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@806 -- # type=--id 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@807 -- # id=0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@818 -- # for n in $shm_files 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:09:05.756 nvmf_trace.0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@821 -- # return 0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- target/nvmf_lvs_grow.sh@1 -- # nvmftestfini 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@117 -- # sync 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@120 -- # set +e 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:05.756 rmmod nvme_tcp 00:09:05.756 rmmod nvme_fabrics 00:09:05.756 rmmod nvme_keyring 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@124 -- # set -e 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@125 -- # return 0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@489 -- # '[' -n 203391 ']' 00:09:05.756 
11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@490 -- # killprocess 203391 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@948 -- # '[' -z 203391 ']' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@952 -- # kill -0 203391 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # uname 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 203391 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@966 -- # echo 'killing process with pid 203391' 00:09:05.756 killing process with pid 203391 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@967 -- # kill 203391 00:09:05.756 11:54:52 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@972 -- # wait 203391 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:06.017 11:54:53 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:07.990 00:09:07.990 real 0m42.491s 00:09:07.990 user 1m6.029s 00:09:07.990 sys 0m9.906s 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_lvs_grow -- common/autotest_common.sh@10 -- # set +x 00:09:07.990 ************************************ 00:09:07.990 END TEST nvmf_lvs_grow 00:09:07.990 ************************************ 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@29 -- # run_test nvmf_bdev_io_wait /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:07.990 11:54:55 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:08.250 ************************************ 00:09:08.250 START TEST nvmf_bdev_io_wait 
00:09:08.250 ************************************ 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdev_io_wait.sh --transport=tcp 00:09:08.250 * Looking for test storage... 00:09:08.250 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # uname -s 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@5 -- # export PATH 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@47 -- # : 0 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:08.250 
11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@14 -- # nvmftestinit 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@285 -- # xtrace_disable 00:09:08.250 11:54:55 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # pci_devs=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # net_devs=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # e810=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@296 -- # local -ga e810 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # x722=() 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@297 -- # local -ga x722 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # mlx=() 00:09:13.533 11:55:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@298 -- # local -ga mlx 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:13.533 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:13.533 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:13.533 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@342 -- # 
[[ ice == unknown ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:13.534 Found net devices under 0000:86:00.0: cvl_0_0 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:13.534 Found net devices under 0000:86:00.1: cvl_0_1 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@414 -- # is_hw=yes 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:13.534 11:55:00 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:13.534 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:13.794 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:13.794 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.186 ms 00:09:13.794 00:09:13.794 --- 10.0.0.2 ping statistics --- 00:09:13.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.794 rtt min/avg/max/mdev = 0.186/0.186/0.186/0.000 ms 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:13.794 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:13.794 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:09:13.794 00:09:13.794 --- 10.0.0.1 ping statistics --- 00:09:13.794 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:13.794 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@422 -- # return 0 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:13.794 11:55:00 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@15 -- # nvmfappstart -m 0xF --wait-for-rpc 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@481 -- # nvmfpid=207642 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@482 -- # waitforlisten 207642 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@829 -- # '[' -z 207642 ']' 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:13.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:13.794 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.054 [2024-07-25 11:55:01.060562] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:09:14.054 [2024-07-25 11:55:01.060602] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:14.054 EAL: No free 2048 kB hugepages reported on node 1 00:09:14.054 [2024-07-25 11:55:01.116602] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:09:14.054 [2024-07-25 11:55:01.191923] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:14.054 [2024-07-25 11:55:01.191962] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:14.054 [2024-07-25 11:55:01.191969] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:14.054 [2024-07-25 11:55:01.191975] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:14.054 [2024-07-25 11:55:01.191981] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:14.054 [2024-07-25 11:55:01.192022] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:14.054 [2024-07-25 11:55:01.192041] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:09:14.054 [2024-07-25 11:55:01.192103] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:09:14.054 [2024-07-25 11:55:01.192104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:14.623 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:14.623 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@862 -- # return 0 00:09:14.623 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:14.623 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:14.623 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@18 -- # rpc_cmd bdev_set_options -p 5 -c 1 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@19 -- # rpc_cmd framework_start_init 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:14.882 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.882 11:55:01 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 [2024-07-25 11:55:01.978200] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:14.883 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.883 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:14.883 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.883 11:55:01 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 Malloc0 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@23 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:14.883 [2024-07-25 11:55:02.034503] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@28 -- # WRITE_PID=207704 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x10 -i 1 --json /dev/fd/63 -q 128 -o 4096 -w write -t 1 -s 256 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@27 -- # gen_nvmf_target_json 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@30 -- # READ_PID=207706 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:14.883 { 00:09:14.883 "params": { 00:09:14.883 "name": "Nvme$subsystem", 00:09:14.883 "trtype": "$TEST_TRANSPORT", 00:09:14.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.883 "adrfam": "ipv4", 00:09:14.883 "trsvcid": "$NVMF_PORT", 00:09:14.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.883 "hdgst": ${hdgst:-false}, 00:09:14.883 "ddgst": ${ddgst:-false} 00:09:14.883 }, 00:09:14.883 "method": "bdev_nvme_attach_controller" 00:09:14.883 } 00:09:14.883 EOF 00:09:14.883 )") 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x20 -i 2 --json /dev/fd/63 -q 128 -o 4096 -w read -t 1 -s 256 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@32 -- # FLUSH_PID=207708 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@29 -- # gen_nvmf_target_json 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:14.883 { 00:09:14.883 "params": { 00:09:14.883 "name": "Nvme$subsystem", 00:09:14.883 "trtype": "$TEST_TRANSPORT", 00:09:14.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.883 "adrfam": "ipv4", 00:09:14.883 "trsvcid": "$NVMF_PORT", 00:09:14.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.883 "hdgst": ${hdgst:-false}, 00:09:14.883 "ddgst": ${ddgst:-false} 00:09:14.883 }, 00:09:14.883 "method": "bdev_nvme_attach_controller" 00:09:14.883 } 00:09:14.883 EOF 00:09:14.883 )") 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x40 -i 3 --json /dev/fd/63 -q 128 -o 4096 -w flush -t 1 -s 256 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@34 -- # UNMAP_PID=207711 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@31 -- # gen_nvmf_target_json 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@35 -- # sync 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x80 -i 4 --json /dev/fd/63 -q 128 -o 4096 -w unmap -t 1 -s 256 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@33 -- # gen_nvmf_target_json 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:14.883 { 00:09:14.883 "params": { 00:09:14.883 "name": "Nvme$subsystem", 00:09:14.883 "trtype": "$TEST_TRANSPORT", 00:09:14.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.883 "adrfam": "ipv4", 00:09:14.883 "trsvcid": "$NVMF_PORT", 00:09:14.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.883 "hdgst": ${hdgst:-false}, 00:09:14.883 "ddgst": ${ddgst:-false} 00:09:14.883 }, 00:09:14.883 "method": "bdev_nvme_attach_controller" 00:09:14.883 } 00:09:14.883 EOF 00:09:14.883 )") 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # config=() 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@532 -- # local subsystem config 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:14.883 { 00:09:14.883 "params": { 00:09:14.883 "name": "Nvme$subsystem", 00:09:14.883 "trtype": "$TEST_TRANSPORT", 00:09:14.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:14.883 "adrfam": "ipv4", 00:09:14.883 "trsvcid": "$NVMF_PORT", 00:09:14.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:14.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:14.883 "hdgst": ${hdgst:-false}, 00:09:14.883 "ddgst": ${ddgst:-false} 00:09:14.883 }, 00:09:14.883 "method": "bdev_nvme_attach_controller" 00:09:14.883 } 00:09:14.883 EOF 00:09:14.883 )") 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@37 -- # wait 207704 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@554 -- # cat 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:14.883 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:14.883 "params": { 00:09:14.883 "name": "Nvme1", 00:09:14.883 "trtype": "tcp", 00:09:14.883 "traddr": "10.0.0.2", 00:09:14.883 "adrfam": "ipv4", 00:09:14.884 "trsvcid": "4420", 00:09:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.884 "hdgst": false, 00:09:14.884 "ddgst": false 00:09:14.884 }, 00:09:14.884 "method": "bdev_nvme_attach_controller" 00:09:14.884 }' 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@556 -- # jq . 
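Note on the config generation above: gen_nvmf_target_json only echoes the per-controller "params"/"method" blob for each bdevperf instance and pipes it through jq; the fully assembled file that each instance reads over /dev/fd/63 is not printed here. A minimal hand-written equivalent is sketched below, assuming SPDK's standard "subsystems"/"bdev" JSON-config wrapper (the wrapper itself is not shown in this log) and a target already listening on 10.0.0.2:4420; /tmp/nvme1.json is only a placeholder path.

cat > /tmp/nvme1.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "method": "bdev_nvme_attach_controller",
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          }
        }
      ]
    }
  ]
}
JSON
# flags copied from the write job launched above; the read, flush and unmap
# instances differ only in core mask (-m), instance id (-i) and workload (-w)
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf \
    -m 0x10 -i 1 --json /tmp/nvme1.json -q 128 -o 4096 -w write -t 1 -s 256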
00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:14.884 "params": { 00:09:14.884 "name": "Nvme1", 00:09:14.884 "trtype": "tcp", 00:09:14.884 "traddr": "10.0.0.2", 00:09:14.884 "adrfam": "ipv4", 00:09:14.884 "trsvcid": "4420", 00:09:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.884 "hdgst": false, 00:09:14.884 "ddgst": false 00:09:14.884 }, 00:09:14.884 "method": "bdev_nvme_attach_controller" 00:09:14.884 }' 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:14.884 "params": { 00:09:14.884 "name": "Nvme1", 00:09:14.884 "trtype": "tcp", 00:09:14.884 "traddr": "10.0.0.2", 00:09:14.884 "adrfam": "ipv4", 00:09:14.884 "trsvcid": "4420", 00:09:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.884 "hdgst": false, 00:09:14.884 "ddgst": false 00:09:14.884 }, 00:09:14.884 "method": "bdev_nvme_attach_controller" 00:09:14.884 }' 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@557 -- # IFS=, 00:09:14.884 11:55:02 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:14.884 "params": { 00:09:14.884 "name": "Nvme1", 00:09:14.884 "trtype": "tcp", 00:09:14.884 "traddr": "10.0.0.2", 00:09:14.884 "adrfam": "ipv4", 00:09:14.884 "trsvcid": "4420", 00:09:14.884 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:14.884 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:14.884 "hdgst": false, 00:09:14.884 "ddgst": false 00:09:14.884 }, 00:09:14.884 "method": "bdev_nvme_attach_controller" 00:09:14.884 }' 00:09:14.884 [2024-07-25 11:55:02.084554] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:09:14.884 [2024-07-25 11:55:02.084605] [ DPDK EAL parameters: bdevperf -c 0x10 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:09:14.884 [2024-07-25 11:55:02.086851] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:09:14.884 [2024-07-25 11:55:02.086889] [ DPDK EAL parameters: bdevperf -c 0x20 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk2 --proc-type=auto ] 00:09:14.884 [2024-07-25 11:55:02.086908] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:09:14.884 [2024-07-25 11:55:02.086948] [ DPDK EAL parameters: bdevperf -c 0x80 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk4 --proc-type=auto ] 00:09:14.884 [2024-07-25 11:55:02.089200] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
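The target side of this bdev_io_wait run was provisioned at the top of the test through rpc_cmd, the suite's wrapper around scripts/rpc.py (the TCP transport itself was created just before this point, per the "*** TCP Transport Init ***" notice). Done by hand against the default /var/tmp/spdk.sock RPC socket, the same sequence is roughly:

cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
# 64 MB malloc bdev with 512-byte blocks, exposed as a namespace of cnode1
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
# listener on the test IP/port used by all four bdevperf jobs
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420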
00:09:14.884 [2024-07-25 11:55:02.089245] [ DPDK EAL parameters: bdevperf -c 0x40 -m 256 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk3 --proc-type=auto ] 00:09:14.884 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.144 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.144 [2024-07-25 11:55:02.261148] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.144 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.144 [2024-07-25 11:55:02.339122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:09:15.144 [2024-07-25 11:55:02.360456] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.403 EAL: No free 2048 kB hugepages reported on node 1 00:09:15.403 [2024-07-25 11:55:02.436486] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:09:15.403 [2024-07-25 11:55:02.461490] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.403 [2024-07-25 11:55:02.522107] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:15.403 [2024-07-25 11:55:02.553175] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:09:15.403 [2024-07-25 11:55:02.597783] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:09:15.662 Running I/O for 1 seconds... 00:09:15.662 Running I/O for 1 seconds... 00:09:15.662 Running I/O for 1 seconds... 00:09:15.662 Running I/O for 1 seconds... 00:09:16.617 00:09:16.617 Latency(us) 00:09:16.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.617 Job: Nvme1n1 (Core Mask 0x20, workload: read, depth: 128, IO size: 4096) 00:09:16.617 Nvme1n1 : 1.01 10307.40 40.26 0.00 0.00 12367.35 3177.07 22681.15 00:09:16.617 =================================================================================================================== 00:09:16.617 Total : 10307.40 40.26 0.00 0.00 12367.35 3177.07 22681.15 00:09:16.617 00:09:16.617 Latency(us) 00:09:16.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.617 Job: Nvme1n1 (Core Mask 0x10, workload: write, depth: 128, IO size: 4096) 00:09:16.617 Nvme1n1 : 1.01 11020.68 43.05 0.00 0.00 11561.23 3462.01 29861.62 00:09:16.617 =================================================================================================================== 00:09:16.617 Total : 11020.68 43.05 0.00 0.00 11561.23 3462.01 29861.62 00:09:16.617 00:09:16.617 Latency(us) 00:09:16.617 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.617 Job: Nvme1n1 (Core Mask 0x80, workload: unmap, depth: 128, IO size: 4096) 00:09:16.617 Nvme1n1 : 1.01 7267.17 28.39 0.00 0.00 17548.65 4274.09 36928.11 00:09:16.617 =================================================================================================================== 00:09:16.617 Total : 7267.17 28.39 0.00 0.00 17548.65 4274.09 36928.11 00:09:16.877 00:09:16.877 Latency(us) 00:09:16.877 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:16.877 Job: Nvme1n1 (Core Mask 0x40, workload: flush, depth: 128, IO size: 4096) 00:09:16.877 Nvme1n1 : 1.00 245617.37 959.44 0.00 0.00 519.41 213.70 644.67 00:09:16.877 =================================================================================================================== 00:09:16.877 Total : 245617.37 959.44 0.00 0.00 519.41 213.70 644.67 00:09:16.877 11:55:03 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@38 -- # wait 207706 00:09:16.877 11:55:03 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@39 -- # wait 207708 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@40 -- # wait 207711 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@42 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@44 -- # trap - SIGINT SIGTERM EXIT 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- target/bdev_io_wait.sh@46 -- # nvmftestfini 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@117 -- # sync 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@120 -- # set +e 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:17.137 rmmod nvme_tcp 00:09:17.137 rmmod nvme_fabrics 00:09:17.137 rmmod nvme_keyring 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@124 -- # set -e 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@125 -- # return 0 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@489 -- # '[' -n 207642 ']' 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@490 -- # killprocess 207642 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@948 -- # '[' -z 207642 ']' 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@952 -- # kill -0 207642 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # uname 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 207642 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@966 -- # echo 'killing process with pid 207642' 00:09:17.137 killing process with pid 207642 00:09:17.137 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@967 -- # kill 207642 00:09:17.137 11:55:04 
nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@972 -- # wait 207642 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:17.397 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:17.398 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:17.398 11:55:04 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:19.308 00:09:19.308 real 0m11.256s 00:09:19.308 user 0m20.418s 00:09:19.308 sys 0m5.719s 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_bdev_io_wait -- common/autotest_common.sh@10 -- # set +x 00:09:19.308 ************************************ 00:09:19.308 END TEST nvmf_bdev_io_wait 00:09:19.308 ************************************ 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@30 -- # run_test nvmf_queue_depth /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:19.308 11:55:06 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:19.568 ************************************ 00:09:19.568 START TEST nvmf_queue_depth 00:09:19.568 ************************************ 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/queue_depth.sh --transport=tcp 00:09:19.568 * Looking for test storage... 
00:09:19.568 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # uname -s 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:19.568 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@5 -- # export PATH 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@47 -- # : 0 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:19.569 11:55:06 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@14 -- # MALLOC_BDEV_SIZE=64 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@15 -- # MALLOC_BLOCK_SIZE=512 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@17 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@19 -- # nvmftestinit 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@285 -- # xtrace_disable 00:09:19.569 11:55:06 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # pci_devs=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # net_devs=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # e810=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@296 -- # local -ga e810 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # x722=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@297 -- # local -ga x722 00:09:24.853 11:55:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # mlx=() 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@298 -- # local -ga mlx 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:24.853 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:24.854 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:24.854 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:24.854 11:55:11 
nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:24.854 Found net devices under 0000:86:00.0: cvl_0_0 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:24.854 Found net devices under 0000:86:00.1: cvl_0_1 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@414 -- # is_hw=yes 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:24.854 
11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:24.854 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:24.854 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.203 ms 00:09:24.854 00:09:24.854 --- 10.0.0.2 ping statistics --- 00:09:24.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.854 rtt min/avg/max/mdev = 0.203/0.203/0.203/0.000 ms 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:24.854 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:24.854 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.270 ms 00:09:24.854 00:09:24.854 --- 10.0.0.1 ping statistics --- 00:09:24.854 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:24.854 rtt min/avg/max/mdev = 0.270/0.270/0.270/0.000 ms 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@422 -- # return 0 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:24.854 11:55:11 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@21 -- # nvmfappstart -m 0x2 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@481 -- # nvmfpid=211606 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@482 -- # waitforlisten 211606 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 211606 ']' 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:24.854 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:24.854 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:24.854 [2024-07-25 11:55:12.071894] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
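For nvmf_queue_depth, nvmftestinit wires the two e810 ports found above (cvl_0_0 and cvl_0_1) into a back-to-back setup with the target port isolated in its own network namespace; the two pings confirm 10.0.0.1 <-> 10.0.0.2 connectivity before the target is started inside that namespace. Pulled together from the interleaved xtrace above, the plumbing amounts to:

ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port moves into the namespace
ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side stays in the root namespace
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                             # root namespace -> target address
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # namespace -> initiator address

This is also why nvmf_tgt below is launched via 'ip netns exec cvl_0_0_ns_spdk ...'.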
00:09:24.854 [2024-07-25 11:55:12.071939] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:24.854 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.114 [2024-07-25 11:55:12.131144] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.114 [2024-07-25 11:55:12.210751] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:25.114 [2024-07-25 11:55:12.210786] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:25.114 [2024-07-25 11:55:12.210793] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:25.114 [2024-07-25 11:55:12.210799] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:25.114 [2024-07-25 11:55:12.210804] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:25.114 [2024-07-25 11:55:12.210821] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.685 [2024-07-25 11:55:12.909729] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@24 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.685 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.946 Malloc0 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@25 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.946 [2024-07-25 11:55:12.976377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@30 -- # bdevperf_pid=211734 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@32 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@33 -- # waitforlisten 211734 /var/tmp/bdevperf.sock 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@829 -- # '[' -z 211734 ']' 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:09:25.946 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:25.946 11:55:12 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:25.946 [2024-07-25 11:55:13.012192] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
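Unlike the bdev_io_wait jobs, the queue-depth bdevperf is started with -z (wait for RPC) on its own socket, and the NVMe-oF namespace is attached over that socket before the 10-second verify run at queue depth 1024 is kicked off (the attach and perform_tests calls appear just below). Condensed, with $SPDK standing for the checkout path used throughout this log, the driver side is:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
$SPDK/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 1024 -o 4096 -w verify -t 10 &
# once bdevperf is listening on /var/tmp/bdevperf.sock, attach the target namespace as NVMe0n1
$SPDK/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 \
        -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
# then trigger the timed run
$SPDK/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests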
00:09:25.946 [2024-07-25 11:55:13.012235] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid211734 ] 00:09:25.946 EAL: No free 2048 kB hugepages reported on node 1 00:09:25.946 [2024-07-25 11:55:13.060507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:25.946 [2024-07-25 11:55:13.133927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@862 -- # return 0 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@34 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:26.884 NVMe0n1 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:26.884 11:55:13 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:09:26.884 Running I/O for 10 seconds... 00:09:36.917 00:09:36.917 Latency(us) 00:09:36.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:36.917 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 1024, IO size: 4096) 00:09:36.917 Verification LBA range: start 0x0 length 0x4000 00:09:36.917 NVMe0n1 : 10.11 12037.89 47.02 0.00 0.00 84432.20 22453.20 73856.22 00:09:36.917 =================================================================================================================== 00:09:36.917 Total : 12037.89 47.02 0.00 0.00 84432.20 22453.20 73856.22 00:09:36.917 0 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@39 -- # killprocess 211734 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 211734 ']' 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 211734 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211734 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 211734' 00:09:37.176 killing process with pid 211734 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 211734 00:09:37.176 Received shutdown signal, 
test time was about 10.000000 seconds 00:09:37.176 00:09:37.176 Latency(us) 00:09:37.176 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:09:37.176 =================================================================================================================== 00:09:37.176 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 211734 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- target/queue_depth.sh@43 -- # nvmftestfini 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@117 -- # sync 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@120 -- # set +e 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:37.176 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:37.176 rmmod nvme_tcp 00:09:37.176 rmmod nvme_fabrics 00:09:37.436 rmmod nvme_keyring 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@124 -- # set -e 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@125 -- # return 0 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@489 -- # '[' -n 211606 ']' 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@490 -- # killprocess 211606 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@948 -- # '[' -z 211606 ']' 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@952 -- # kill -0 211606 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # uname 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 211606 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@966 -- # echo 'killing process with pid 211606' 00:09:37.436 killing process with pid 211606 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@967 -- # kill 211606 00:09:37.436 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@972 -- # wait 211606 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@496 
-- # nvmf_tcp_fini 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:37.695 11:55:24 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.605 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:39.605 00:09:39.605 real 0m20.190s 00:09:39.605 user 0m24.863s 00:09:39.605 sys 0m5.558s 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_queue_depth -- common/autotest_common.sh@10 -- # set +x 00:09:39.606 ************************************ 00:09:39.606 END TEST nvmf_queue_depth 00:09:39.606 ************************************ 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@31 -- # run_test nvmf_target_multipath /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:39.606 ************************************ 00:09:39.606 START TEST nvmf_target_multipath 00:09:39.606 ************************************ 00:09:39.606 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multipath.sh --transport=tcp 00:09:39.866 * Looking for test storage... 
00:09:39.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # uname -s 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@5 -- # export PATH 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@47 -- # : 0 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@11 -- # MALLOC_BDEV_SIZE=64 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@43 -- # nvmftestinit 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@285 -- # xtrace_disable 00:09:39.866 11:55:26 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # pci_devs=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # net_devs=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # e810=() 
00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@296 -- # local -ga e810 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # x722=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@297 -- # local -ga x722 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # mlx=() 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@298 -- # local -ga mlx 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:45.150 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:45.150 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:45.150 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:45.151 Found net devices under 0000:86:00.0: cvl_0_0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:45.151 11:55:32 
nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:45.151 Found net devices under 0000:86:00.1: cvl_0_1 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@414 -- # is_hw=yes 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:45.151 PING 
10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:45.151 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:09:45.151 00:09:45.151 --- 10.0.0.2 ping statistics --- 00:09:45.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.151 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:45.151 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:09:45.151 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.291 ms 00:09:45.151 00:09:45.151 --- 10.0.0.1 ping statistics --- 00:09:45.151 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:45.151 rtt min/avg/max/mdev = 0.291/0.291/0.291/0.000 ms 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@422 -- # return 0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@45 -- # '[' -z ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@46 -- # echo 'only one NIC for nvmf test' 00:09:45.151 only one NIC for nvmf test 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@47 -- # nvmftestfini 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:45.151 rmmod nvme_tcp 00:09:45.151 rmmod nvme_fabrics 00:09:45.151 rmmod nvme_keyring 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- 
nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:45.151 11:55:32 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@48 -- # exit 0 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- target/multipath.sh@1 -- # nvmftestfini 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@488 -- # nvmfcleanup 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@117 -- # sync 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@120 -- # set +e 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@121 -- # for i in {1..20} 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@124 -- # set -e 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@125 -- # return 0 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@278 -- # remove_spdk_ns 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:09:47.696 00:09:47.696 real 0m7.639s 
00:09:47.696 user 0m1.582s 00:09:47.696 sys 0m4.059s 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@1124 -- # xtrace_disable 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_target_multipath -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 ************************************ 00:09:47.696 END TEST nvmf_target_multipath 00:09:47.696 ************************************ 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@32 -- # run_test nvmf_zcopy /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:09:47.696 ************************************ 00:09:47.696 START TEST nvmf_zcopy 00:09:47.696 ************************************ 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh --transport=tcp 00:09:47.696 * Looking for test storage... 00:09:47.696 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # uname -s 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- 
nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.696 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@5 -- # export PATH 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@47 -- # : 0 00:09:47.697 11:55:34 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@51 -- # have_pci_nics=0 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@12 -- # nvmftestinit 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@448 -- # prepare_net_devs 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@410 -- # local -g is_hw=no 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@412 -- # remove_spdk_ns 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@285 -- # xtrace_disable 00:09:47.697 11:55:34 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # pci_devs=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@291 -- # local -a pci_devs 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # pci_net_devs=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # pci_drivers=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@293 -- # local -A pci_drivers 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # net_devs=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@295 -- # local -ga net_devs 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # e810=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@296 -- # local -ga e810 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # x722=() 00:09:52.983 11:55:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@297 -- # local -ga x722 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # mlx=() 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@298 -- # local -ga mlx 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:09:52.983 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:09:52.984 Found 0000:86:00.0 (0x8086 - 0x159b) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:09:52.984 Found 0000:86:00.1 (0x8086 - 0x159b) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@342 -- # [[ 
ice == unknown ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:09:52.984 Found net devices under 0000:86:00.0: cvl_0_0 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@390 -- # [[ up == up ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:09:52.984 Found net devices under 0000:86:00.1: cvl_0_1 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@414 -- # is_hw=yes 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:09:52.984 11:55:39 
nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:09:52.984 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:09:52.984 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:09:52.984 00:09:52.984 --- 10.0.0.2 ping statistics --- 00:09:52.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.984 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:09:52.984 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:09:52.984 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.406 ms 00:09:52.984 00:09:52.984 --- 10.0.0.1 ping statistics --- 00:09:52.984 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:09:52.984 rtt min/avg/max/mdev = 0.406/0.406/0.406/0.000 ms 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@422 -- # return 0 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@13 -- # nvmfappstart -m 0x2 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@722 -- # xtrace_disable 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.984 11:55:39 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@481 -- # nvmfpid=220596 00:09:52.984 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@482 -- # waitforlisten 220596 00:09:52.984 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@829 -- # '[' -z 220596 ']' 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@834 -- # local max_retries=100 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:52.985 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@838 -- # xtrace_disable 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:09:52.985 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:52.985 [2024-07-25 11:55:40.048530] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
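The nvmftestinit/nvmfappstart steps traced above move one physical port (cvl_0_0) into a private network namespace for the target, keep its peer port (cvl_0_1) on the host as the initiator, open TCP port 4420, sanity-check connectivity with ping in both directions, load nvme-tcp, and then launch nvmf_tgt inside the namespace. Collected into a standalone sketch (interface names, addresses and flags copied from this log; the relative path into the spdk checkout is an assumption; run as root):

  # target port goes into its own namespace, initiator port stays on the host
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                                 # host -> target namespace
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # target namespace -> host
  modprobe nvme-tcp
  # start the SPDK target inside the namespace (shm id 0, all trace groups, core mask 0x2)
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &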
00:09:52.985 [2024-07-25 11:55:40.048582] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:09:52.985 EAL: No free 2048 kB hugepages reported on node 1 00:09:52.985 [2024-07-25 11:55:40.106293] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:52.985 [2024-07-25 11:55:40.179151] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:09:52.985 [2024-07-25 11:55:40.179189] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:09:52.985 [2024-07-25 11:55:40.179196] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:09:52.985 [2024-07-25 11:55:40.179202] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:09:52.985 [2024-07-25 11:55:40.179207] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:09:52.985 [2024-07-25 11:55:40.179240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@862 -- # return 0 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@728 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@15 -- # '[' tcp '!=' tcp ']' 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@22 -- # rpc_cmd nvmf_create_transport -t tcp -o -c 0 --zcopy 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 [2024-07-25 11:55:40.886168] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 [2024-07-25 11:55:40.902286] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@29 -- # rpc_cmd bdev_malloc_create 32 4096 -b malloc0 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.925 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.925 malloc0 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -t 10 -q 128 -w verify -o 8192 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@33 -- # gen_nvmf_target_json 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:09:53.926 { 00:09:53.926 "params": { 00:09:53.926 "name": "Nvme$subsystem", 00:09:53.926 "trtype": "$TEST_TRANSPORT", 00:09:53.926 "traddr": "$NVMF_FIRST_TARGET_IP", 00:09:53.926 "adrfam": "ipv4", 00:09:53.926 "trsvcid": "$NVMF_PORT", 00:09:53.926 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:09:53.926 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:09:53.926 "hdgst": ${hdgst:-false}, 00:09:53.926 "ddgst": ${ddgst:-false} 00:09:53.926 }, 00:09:53.926 "method": "bdev_nvme_attach_controller" 00:09:53.926 } 00:09:53.926 EOF 00:09:53.926 )") 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 
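Before bdevperf starts, the zcopy test provisions the target over RPC: a TCP transport with zero-copy enabled, a subsystem with one data listener plus a discovery listener, and a malloc bdev exposed as namespace 1. Issued directly with scripts/rpc.py (which rpc_cmd wraps, talking to the default /var/tmp/spdk.sock), the same sequence looks roughly like this, with every flag copied verbatim from the rpc_cmd calls above:

  RPC=./scripts/rpc.py
  $RPC nvmf_create_transport -t tcp -o -c 0 --zcopy                  # "*** TCP Transport Init ***"
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $RPC nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  $RPC bdev_malloc_create 32 4096 -b malloc0                         # 32 MB malloc bdev, 4 KiB blocks
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 # attach it as NSID 1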
00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:09:53.926 11:55:40 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:09:53.926 "params": { 00:09:53.926 "name": "Nvme1", 00:09:53.926 "trtype": "tcp", 00:09:53.926 "traddr": "10.0.0.2", 00:09:53.926 "adrfam": "ipv4", 00:09:53.926 "trsvcid": "4420", 00:09:53.926 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:09:53.926 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:09:53.926 "hdgst": false, 00:09:53.926 "ddgst": false 00:09:53.926 }, 00:09:53.926 "method": "bdev_nvme_attach_controller" 00:09:53.926 }' 00:09:53.926 [2024-07-25 11:55:40.989913] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:09:53.926 [2024-07-25 11:55:40.989955] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid220631 ] 00:09:53.926 EAL: No free 2048 kB hugepages reported on node 1 00:09:53.926 [2024-07-25 11:55:41.044457] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:09:53.926 [2024-07-25 11:55:41.118052] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:09:54.186 Running I/O for 10 seconds... 00:10:04.171 00:10:04.171 Latency(us) 00:10:04.171 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:04.171 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 8192) 00:10:04.171 Verification LBA range: start 0x0 length 0x1000 00:10:04.171 Nvme1n1 : 10.01 7579.31 59.21 0.00 0.00 16846.00 726.59 50605.19 00:10:04.171 =================================================================================================================== 00:10:04.171 Total : 7579.31 59.21 0.00 0.00 16846.00 726.59 50605.19 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@39 -- # perfpid=222458 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@41 -- # xtrace_disable 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -t 5 -q 128 -w randrw -M 50 -o 8192 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@37 -- # gen_nvmf_target_json 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # config=() 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@532 -- # local subsystem config 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:10:04.432 { 00:10:04.432 "params": { 00:10:04.432 "name": "Nvme$subsystem", 00:10:04.432 "trtype": "$TEST_TRANSPORT", 00:10:04.432 "traddr": "$NVMF_FIRST_TARGET_IP", 00:10:04.432 "adrfam": "ipv4", 00:10:04.432 "trsvcid": "$NVMF_PORT", 00:10:04.432 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:10:04.432 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:10:04.432 "hdgst": ${hdgst:-false}, 00:10:04.432 "ddgst": ${ddgst:-false} 00:10:04.432 }, 00:10:04.432 "method": "bdev_nvme_attach_controller" 00:10:04.432 } 00:10:04.432 EOF 00:10:04.432 )") 00:10:04.432 [2024-07-25 
11:55:51.493026] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.493064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@554 -- # cat 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@556 -- # jq . 00:10:04.432 [2024-07-25 11:55:51.501012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.501023] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@557 -- # IFS=, 00:10:04.432 11:55:51 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:10:04.432 "params": { 00:10:04.432 "name": "Nvme1", 00:10:04.432 "trtype": "tcp", 00:10:04.432 "traddr": "10.0.0.2", 00:10:04.432 "adrfam": "ipv4", 00:10:04.432 "trsvcid": "4420", 00:10:04.432 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:10:04.432 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:10:04.432 "hdgst": false, 00:10:04.432 "ddgst": false 00:10:04.432 }, 00:10:04.432 "method": "bdev_nvme_attach_controller" 00:10:04.432 }' 00:10:04.432 [2024-07-25 11:55:51.509027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.509037] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.517056] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.517065] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.525075] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.525084] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.532041] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
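The first run completed before this point: the flattened summary table earlier in the log reports Nvme1n1 at 7579.31 IOPS and 59.21 MiB/s over 10.01 s, with 0 Fail/s and 0 TO/s and an average latency of 16846.00 us (min 726.59, max 50605.19). Those columns are self-consistent: 7579.31 IOPS x 8192-byte I/Os is about 62.1 MB/s, i.e. 59.2 MiB/s, and at queue depth 128 Little's law gives 128 / 7579.31 IOPS, roughly 16.9 ms, in line with the ~16.8 ms average. The "Starting SPDK v24.09-pre ..." entry immediately above is the second bdevperf instance coming up (perfpid=222458 in the xtrace, matching the --file-prefix=spdk_pid222458 EAL argument below); it runs -t 5 -q 128 -w randrw -M 50 -o 8192, a 5-second random read/write workload with a 50/50 mix against the same Nvme1 controller, with the same generated JSON fed over /dev/fd/63.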
00:10:04.432 [2024-07-25 11:55:51.532086] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid222458 ] 00:10:04.432 [2024-07-25 11:55:51.533097] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.533107] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.541116] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.541125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.549136] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.549145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 EAL: No free 2048 kB hugepages reported on node 1 00:10:04.432 [2024-07-25 11:55:51.557159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.557168] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.565180] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.565190] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.573201] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.573210] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.581221] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.581231] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.585800] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:04.432 [2024-07-25 11:55:51.589245] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.589256] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.597267] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.597279] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.605288] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.605297] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.613311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.613320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.621331] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.621341] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.629357] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.629379] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.637376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.637391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.432 [2024-07-25 11:55:51.645396] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.432 [2024-07-25 11:55:51.645405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.433 [2024-07-25 11:55:51.653417] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.433 [2024-07-25 11:55:51.653426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.433 [2024-07-25 11:55:51.661026] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:04.433 [2024-07-25 11:55:51.661442] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.433 [2024-07-25 11:55:51.661453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.433 [2024-07-25 11:55:51.669462] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.433 [2024-07-25 11:55:51.669471] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.433 [2024-07-25 11:55:51.677495] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.433 [2024-07-25 11:55:51.677514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.685507] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.685519] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.693527] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.693539] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.701548] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.701559] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.709568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.709578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.717592] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.717603] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.725615] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.692 [2024-07-25 11:55:51.725627] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.692 [2024-07-25 11:55:51.733632] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.733641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.741654] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.741663] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: 
Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.749690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.749707] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.757719] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.757737] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.765729] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.765741] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.773753] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.773767] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.781769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.781778] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.789790] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.789798] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.797811] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.797819] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.805832] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.805841] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.813856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.813865] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.821883] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.821897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.829907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.829920] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.837930] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.837942] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.845948] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.845957] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.853978] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.853995] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.861990] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in 
use 00:10:04.693 [2024-07-25 11:55:51.861999] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 Running I/O for 5 seconds... 00:10:04.693 [2024-07-25 11:55:51.870016] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.870030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.893586] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.893605] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.905457] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.905475] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.913216] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.913234] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.922940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.922958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.693 [2024-07-25 11:55:51.932841] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.693 [2024-07-25 11:55:51.932858] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.942913] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.942932] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.952950] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.952967] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.960834] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.960859] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.970703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.970721] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.979472] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.979489] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.988973] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.988992] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:51.997730] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:51.997748] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.006190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.006209] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add 
namespace 00:10:04.953 [2024-07-25 11:55:52.015322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.015342] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.024073] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.024092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.033283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.033302] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.041795] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.041813] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.050074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.050092] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.057493] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.057511] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.067459] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.067478] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.077138] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.077156] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.085561] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.085579] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.094139] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.094157] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.101602] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.101621] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.111765] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.111783] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.122115] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.122133] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.131259] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.131282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.139802] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 
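Starting around 11:55:51.49 and continuing through the 5-second randrw run ("Running I/O for 5 seconds..." above), the log settles into repeating pairs of target-side errors: subsystem.c:2058 (spdk_nvmf_subsystem_add_ns_ext: "Requested NSID 1 already in use") followed by nvmf_rpc.c:1553 (nvmf_rpc_ns_paused: "Unable to add namespace"). Each pair corresponds to one nvmf_subsystem_add_ns RPC being rejected because NSID 1 is still attached to nqn.2016-06.io.spdk:cnode1 while I/O is in flight, and the pairs keep arriving for the rest of this excerpt; the caller issuing the repeated RPCs is presumably zcopy.sh, whose xtrace is disabled at this point, so it is not visible here. A hypothetical reproduction of a single rejected attempt:

# Hypothetical sketch, not the script's literal code: re-adding an NSID that is already
# attached makes the target log exactly the subsystem.c / nvmf_rpc.c error pair seen above.
scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 \
    || echo 'add_ns rejected: NSID 1 already in use'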
00:10:04.953 [2024-07-25 11:55:52.139820] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.148856] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.148875] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.156481] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.156499] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.164826] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.164843] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.174306] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.174325] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.183740] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.183758] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.192605] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.192623] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:04.953 [2024-07-25 11:55:52.200227] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:04.953 [2024-07-25 11:55:52.200245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.210170] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.210189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.219616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.219635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.228843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.228862] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.237568] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.237586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.246368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.246386] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.255377] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.255395] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.265397] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.265414] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.274511] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.274530] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.282938] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.282956] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.289924] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.289943] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.300663] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.300685] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.310662] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.310680] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.318143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.318161] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.327339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.327357] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.337599] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.337617] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.346845] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.346863] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.355330] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.355348] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.364650] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.364668] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.372364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.372381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.382435] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.382453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.390880] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.390897] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.399326] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.399344] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.408640] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.408657] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.417409] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.417426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.426101] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.426118] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.434669] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.434687] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.443072] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.443089] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.451927] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.451945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.213 [2024-07-25 11:55:52.461061] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.213 [2024-07-25 11:55:52.461079] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.467995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.468018] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.478214] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.478232] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.486817] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.486834] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.494283] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.494300] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.505135] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.505154] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.514060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.514077] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.528554] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.528572] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.538786] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.538805] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.546064] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.546082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.556060] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.556078] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.563353] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.563372] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.573236] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.573255] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.583027] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.583051] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.592797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.592814] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.600779] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.600797] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.610252] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.610271] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.617007] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.617025] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.627582] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.627599] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.636509] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.636527] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.645702] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.645719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.654479] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.654497] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.663163] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.663180] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.670079] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.670097] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.680159] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.680176] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.687065] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.687082] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.697074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.697091] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.706022] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.706040] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.473 [2024-07-25 11:55:52.714862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.473 [2024-07-25 11:55:52.714880] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.723344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.723363] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.732086] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.732106] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.741786] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.741804] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.750405] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.750423] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.759012] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.759030] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.767567] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.767585] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.777344] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.777362] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.785956] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.785973] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.794548] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.794566] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.803302] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.803320] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.812265] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.812282] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.821024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.821041] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.830063] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.830081] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.838855] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.838872] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.846297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.846314] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.856319] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.856336] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.864631] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.864649] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.875107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.875124] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.883800] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.883817] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.894226] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.894244] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.903659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.903677] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.912523] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.912541] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.922469] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.922486] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.931431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.931449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.940154] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.940172] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.950455] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.950472] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.959597] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.959614] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.968200] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.968217] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.733 [2024-07-25 11:55:52.977946] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.733 [2024-07-25 11:55:52.977968] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:52.987559] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:52.987578] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:52.997335] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:52.997353] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.006594] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.006612] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.015743] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.015761] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.023564] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.023586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.032953] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.032971] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.041769] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.041787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.050620] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.050637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.058435] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.058452] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.069926] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.069945] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.079069] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.079088] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.088015] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.088033] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.095260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.095278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.105034] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.105057] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.111734] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.111751] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.121616] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.121635] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.129190] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.129207] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.142569] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.142586] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.152995] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.153012] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.162172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.162189] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.171181] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.171199] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.182909] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.182926] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.191781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.191799] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.199659] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.199676] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.210889] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.210906] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.222107] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.222125] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.231105] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.231123] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:05.993 [2024-07-25 11:55:53.239406] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:05.993 [2024-07-25 11:55:53.239425] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.248882] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.248901] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.256310] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.256327] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.268362] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.268381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.276260] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.276278] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.286698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.286716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.294964] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.294982] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.304431] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.304449] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.313203] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.313220] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.322024] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:06.253 [2024-07-25 11:55:53.322047] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:06.253 [2024-07-25 11:55:53.335051] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:06.253 [2024-07-25 11:55:53.335073] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
00:10:06.253 [2024-07-25 11:55:53.344695] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:06.253 [2024-07-25 11:55:53.344712] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace
[... the same pair of errors (subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use, followed by nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace) repeats for every further duplicate-NSID add attempt from 2024-07-25 11:55:53.353160 through 11:55:56.152596, timer 00:10:06.253 to 00:10:09.140 ...]
00:10:09.140 [2024-07-25 11:55:56.161401] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use
00:10:09.140 [2024-07-25 11:55:56.161420]
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.169862] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.169881] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.178925] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.178944] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.186128] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.186147] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.196860] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.196879] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.205768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.205786] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.215020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.215038] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.223768] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.223787] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.232186] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.232204] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.241255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.241274] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.250363] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.250381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.257211] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.257229] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.267885] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.267903] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.277835] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.277853] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.285171] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.285191] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.295247] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.295265] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.304388] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.304405] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.313432] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.313453] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.322166] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.322184] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.331052] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.331070] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.340520] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.340538] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.349225] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.349243] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.358126] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.358145] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.366821] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.366839] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.375655] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.375674] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.140 [2024-07-25 11:55:56.384213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.140 [2024-07-25 11:55:56.384230] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.392955] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.392974] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.402195] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.402213] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.410436] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.410454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.418439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.418457] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.428907] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.428925] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.436277] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.399 [2024-07-25 11:55:56.436294] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.399 [2024-07-25 11:55:56.447681] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.447698] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.459923] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.459941] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.468542] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.468560] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.476352] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.476370] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.485891] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.485914] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.494364] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.494381] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.502865] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.502883] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.511623] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.511641] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.520368] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.520387] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.529143] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.529160] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.536496] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.536514] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.546255] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.546272] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.553978] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.553996] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.561427] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.561444] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.571373] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.571391] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.580628] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.580646] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.589940] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.589958] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.597111] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.597128] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.610701] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.610719] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.620992] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.621010] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.628797] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.628815] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.638781] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.638799] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.400 [2024-07-25 11:55:56.647311] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.400 [2024-07-25 11:55:56.647329] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.660004] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.660026] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.670980] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.670998] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.682439] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.682458] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.690690] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.690707] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.698095] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.698113] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.709217] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.709235] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.718703] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.718720] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.727562] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.727580] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.735987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.736005] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.743202] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.743221] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.753291] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.753309] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.763248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.763266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.771619] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.771637] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.780698] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.780716] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.789248] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.789266] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.802773] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.802790] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.813376] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.813393] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.823593] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.823611] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.832691] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.832710] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.841508] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.841526] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.850794] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.850812] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.859843] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.859860] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.868571] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.868588] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.875505] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.875523] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 [2024-07-25 11:55:56.882987] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.662 [2024-07-25 11:55:56.883004] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.662 00:10:09.662 Latency(us) 00:10:09.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:10:09.663 Job: Nvme1n1 (Core Mask 0x1, workload: randrw, percentage: 50, depth: 128, IO size: 8192) 00:10:09.663 Nvme1n1 : 5.01 15416.73 120.44 0.00 0.00 8297.28 1937.59 37611.97 00:10:09.663 =================================================================================================================== 00:10:09.663 Total : 15416.73 120.44 0.00 0.00 8297.28 1937.59 37611.97 00:10:09.663 [2024-07-25 11:55:56.891001] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.663 [2024-07-25 11:55:56.891015] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.663 [2024-07-25 11:55:56.899020] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.663 [2024-07-25 11:55:56.899032] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.663 [2024-07-25 11:55:56.907050] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.663 [2024-07-25 11:55:56.907064] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.915074] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.915093] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.923090] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.923103] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.931108] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.931121] 
nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.939127] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.939142] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.947151] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.947164] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.955172] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.955185] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.963192] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.963205] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.971213] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.971225] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.979234] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.979245] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.987257] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.987267] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:56.995275] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:56.995284] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.003297] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.003307] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.011322] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.011333] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.019339] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.019349] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.027359] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.027368] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.035381] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.035390] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.043404] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.043426] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.051424] 
subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.051433] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.059445] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.059454] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 [2024-07-25 11:55:57.067470] subsystem.c:2058:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Requested NSID 1 already in use 00:10:09.923 [2024-07-25 11:55:57.067479] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:09.923 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/zcopy.sh: line 42: kill: (222458) - No such process 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@49 -- # wait 222458 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@52 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@53 -- # rpc_cmd bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.923 delay0 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@54 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 delay0 -n 1 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:09.923 11:55:57 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1' 00:10:09.923 EAL: No free 2048 kB hugepages reported on node 1 00:10:09.923 [2024-07-25 11:55:57.151328] nvme_fabric.c: 295:nvme_fabric_discover_probe: *WARNING*: Skipping unsupported current discovery service or discovery service referral 00:10:16.490 Initializing NVMe Controllers 00:10:16.490 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:10:16.490 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:10:16.490 Initialization complete. Launching workers. 
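The zcopy trace above (target/zcopy.sh steps 52-56) tears namespace 1 down, rebuilds it on top of a delay bdev, and then points the SPDK abort example at it so queued I/O sits in the target long enough to be cancelled. A minimal sketch of the equivalent manual sequence — assuming a target already serving nqn.2016-06.io.spdk:cnode1 with a malloc0 bdev on the 10.0.0.2:4420 listener, and assuming rpc_cmd in this trace resolves to scripts/rpc.py against that running target — would look roughly like:

  # Hypothetical manual replay of zcopy.sh steps 52-56 above; the SPDK path, NQN and
  # the 10.0.0.2:4420 listener are taken from this log and will differ elsewhere.
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NQN=nqn.2016-06.io.spdk:cnode1

  # Swap namespace 1 from the plain malloc bdev to a delay bdev (1000000 in every
  # latency argument) so in-flight I/O stays queued long enough for aborts to land.
  $SPDK/scripts/rpc.py nvmf_subsystem_remove_ns $NQN 1
  $SPDK/scripts/rpc.py bdev_delay_create -b malloc0 -d delay0 -r 1000000 -t 1000000 -w 1000000 -n 1000000
  $SPDK/scripts/rpc.py nvmf_subsystem_add_ns $NQN delay0 -n 1

  # 5 seconds of queue-depth-64 randrw traffic, aborted as it runs
  # (the same invocation the trace shows).
  $SPDK/build/examples/abort -c 0x1 -t 5 -q 64 -w randrw -M 50 -l warning \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 ns:1'

Only the transport ID string after -r needs to change to point the example at a different listener or namespace.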
00:10:16.490 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 I/O completed: 320, failed: 107 00:10:16.490 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) abort submitted 396, failed to submit 31 00:10:16.490 success 211, unsuccess 185, failed 0 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@59 -- # trap - SIGINT SIGTERM EXIT 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- target/zcopy.sh@60 -- # nvmftestfini 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@117 -- # sync 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@120 -- # set +e 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:16.490 rmmod nvme_tcp 00:10:16.490 rmmod nvme_fabrics 00:10:16.490 rmmod nvme_keyring 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@124 -- # set -e 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@125 -- # return 0 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@489 -- # '[' -n 220596 ']' 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@490 -- # killprocess 220596 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@948 -- # '[' -z 220596 ']' 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@952 -- # kill -0 220596 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # uname 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 220596 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:10:16.490 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@966 -- # echo 'killing process with pid 220596' 00:10:16.490 killing process with pid 220596 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@967 -- # kill 220596 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@972 -- # wait 220596 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@628 -- # 
xtrace_disable_per_cmd _remove_spdk_ns 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:16.491 11:56:03 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.399 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:18.659 00:10:18.659 real 0m31.126s 00:10:18.659 user 0m42.011s 00:10:18.659 sys 0m10.168s 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_zcopy -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 ************************************ 00:10:18.659 END TEST nvmf_zcopy 00:10:18.659 ************************************ 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@33 -- # run_test nvmf_nmic /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:18.659 ************************************ 00:10:18.659 START TEST nvmf_nmic 00:10:18.659 ************************************ 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nmic.sh --transport=tcp 00:10:18.659 * Looking for test storage... 
00:10:18.659 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # uname -s 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:18.659 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@5 -- # export PATH 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@47 -- # : 0 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@14 -- # nvmftestinit 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:18.660 11:56:05 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@285 -- # xtrace_disable 00:10:18.660 11:56:05 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # pci_devs=() 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # net_devs=() 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # e810=() 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@296 -- # local -ga e810 00:10:23.941 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # x722=() 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@297 -- # local -ga x722 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # mlx=() 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@298 -- # local -ga mlx 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:23.942 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:23.942 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.942 11:56:11 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:23.942 Found net devices under 0000:86:00.0: cvl_0_0 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:23.942 Found net devices under 0000:86:00.1: cvl_0_1 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@414 -- # is_hw=yes 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:23.942 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic 
-- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:24.203 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:24.203 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:10:24.203 00:10:24.203 --- 10.0.0.2 ping statistics --- 00:10:24.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.203 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:24.203 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:24.203 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.581 ms 00:10:24.203 00:10:24.203 --- 10.0.0.1 ping statistics --- 00:10:24.203 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:24.203 rtt min/avg/max/mdev = 0.581/0.581/0.581/0.000 ms 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:24.203 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@422 -- # return 0 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@15 -- # nvmfappstart -m 0xF 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@481 -- # nvmfpid=227919 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@482 -- # waitforlisten 227919 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@829 -- # '[' -z 227919 ']' 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@834 -- # local max_retries=100 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:24.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:24.464 11:56:11 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:24.464 [2024-07-25 11:56:11.529344] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:10:24.464 [2024-07-25 11:56:11.529390] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:24.464 EAL: No free 2048 kB hugepages reported on node 1 00:10:24.464 [2024-07-25 11:56:11.590036] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:24.464 [2024-07-25 11:56:11.666120] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:24.464 [2024-07-25 11:56:11.666160] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:24.464 [2024-07-25 11:56:11.666167] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:24.464 [2024-07-25 11:56:11.666173] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:24.464 [2024-07-25 11:56:11.666178] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
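The nvmf_tcp_init steps traced above build a small namespace-based topology: the first detected port (cvl_0_0) is moved into a private network namespace and becomes the target side at 10.0.0.2, while the second port (cvl_0_1) stays on the host as the initiator at 10.0.0.1. A condensed sketch of the same commands follows; the interface and namespace names are the ones detected in this particular run, not fixed values.

ip netns add cvl_0_0_ns_spdk                                        # private namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move the first port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address on the host
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address inside the namespace
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in on the initiator port
ping -c 1 10.0.0.2                                                  # host -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> host sanity check

nvmf_tgt is then launched inside that namespace (ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -m 0xF), so every listener it opens on 10.0.0.2 is reachable from the host-side initiator over cvl_0_1.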
00:10:24.464 [2024-07-25 11:56:11.666239] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:24.464 [2024-07-25 11:56:11.666335] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:24.464 [2024-07-25 11:56:11.666425] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:24.464 [2024-07-25 11:56:11.666426] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@862 -- # return 0 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.403 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.403 [2024-07-25 11:56:12.382371] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 Malloc0 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@21 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@22 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@23 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 [2024-07-25 11:56:12.434297] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@25 -- # echo 'test case1: single bdev can'\''t be used in multiple subsystems' 00:10:25.404 test case1: single bdev can't be used in multiple subsystems 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@26 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@28 -- # nmic_status=0 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 [2024-07-25 11:56:12.458197] bdev.c:8111:bdev_open: *ERROR*: bdev Malloc0 already claimed: type exclusive_write by module NVMe-oF Target 00:10:25.404 [2024-07-25 11:56:12.458216] subsystem.c:2087:spdk_nvmf_subsystem_add_ns_ext: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode2: bdev Malloc0 cannot be opened, error=-1 00:10:25.404 [2024-07-25 11:56:12.458223] nvmf_rpc.c:1553:nvmf_rpc_ns_paused: *ERROR*: Unable to add namespace 00:10:25.404 request: 00:10:25.404 { 00:10:25.404 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:10:25.404 "namespace": { 00:10:25.404 "bdev_name": "Malloc0", 00:10:25.404 "no_auto_visible": false 00:10:25.404 }, 00:10:25.404 "method": "nvmf_subsystem_add_ns", 00:10:25.404 "req_id": 1 00:10:25.404 } 00:10:25.404 Got JSON-RPC error response 00:10:25.404 response: 00:10:25.404 { 00:10:25.404 "code": -32602, 00:10:25.404 "message": "Invalid parameters" 00:10:25.404 } 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@29 -- # nmic_status=1 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@31 -- # '[' 1 -eq 0 ']' 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@36 -- # echo ' Adding namespace failed - expected result.' 00:10:25.404 Adding namespace failed - expected result. 
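For reference, the case1 sequence above maps onto plain scripts/rpc.py calls; rpc_cmd in the test harness is effectively a wrapper around them, talking to the target's socket at /var/tmp/spdk.sock. The point of the test is that the final call must fail, because Malloc0 is already claimed exclusively by cnode1. Socket path and NQNs below are the ones used in this run.

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK2
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420
# Expected to fail with "Invalid parameters": the bdev cannot be opened a second
# time because the NVMe-oF target already holds an exclusive_write claim on it.
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc0 || echo 'Adding namespace failed - expected result.'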
00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@39 -- # echo 'test case2: host connect to nvmf target in multiple paths' 00:10:25.404 test case2: host connect to nvmf target in multiple paths 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@559 -- # xtrace_disable 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:25.404 [2024-07-25 11:56:12.470313] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:10:25.404 11:56:12 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@41 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:26.343 11:56:13 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@42 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4421 00:10:27.724 11:56:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@44 -- # waitforserial SPDKISFASTANDAWESOME 00:10:27.724 11:56:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1198 -- # local i=0 00:10:27.724 11:56:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:27.724 11:56:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:10:27.724 11:56:14 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1205 -- # sleep 2 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1208 -- # return 0 00:10:29.632 11:56:16 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:29.632 [global] 00:10:29.632 thread=1 00:10:29.632 invalidate=1 00:10:29.632 rw=write 00:10:29.632 time_based=1 00:10:29.632 runtime=1 00:10:29.632 ioengine=libaio 00:10:29.632 direct=1 00:10:29.632 bs=4096 00:10:29.632 iodepth=1 00:10:29.632 norandommap=0 00:10:29.632 numjobs=1 00:10:29.632 00:10:29.632 verify_dump=1 00:10:29.632 verify_backlog=512 00:10:29.632 verify_state_save=0 00:10:29.632 do_verify=1 00:10:29.632 verify=crc32c-intel 00:10:29.632 [job0] 00:10:29.632 filename=/dev/nvme0n1 00:10:29.632 Could not set queue depth (nvme0n1) 00:10:29.892 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, 
ioengine=libaio, iodepth=1 00:10:29.892 fio-3.35 00:10:29.892 Starting 1 thread 00:10:31.273 00:10:31.273 job0: (groupid=0, jobs=1): err= 0: pid=228935: Thu Jul 25 11:56:18 2024 00:10:31.273 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:31.273 slat (nsec): min=7116, max=35878, avg=8097.54, stdev=1430.46 00:10:31.273 clat (usec): min=346, max=861, avg=466.71, stdev=73.31 00:10:31.273 lat (usec): min=353, max=869, avg=474.81, stdev=73.32 00:10:31.273 clat percentiles (usec): 00:10:31.273 | 1.00th=[ 359], 5.00th=[ 371], 10.00th=[ 420], 20.00th=[ 433], 00:10:31.273 | 30.00th=[ 437], 40.00th=[ 441], 50.00th=[ 449], 60.00th=[ 461], 00:10:31.273 | 70.00th=[ 469], 80.00th=[ 498], 90.00th=[ 515], 95.00th=[ 635], 00:10:31.273 | 99.00th=[ 775], 99.50th=[ 783], 99.90th=[ 848], 99.95th=[ 865], 00:10:31.273 | 99.99th=[ 865] 00:10:31.273 write: IOPS=1469, BW=5878KiB/s (6019kB/s)(5884KiB/1001msec); 0 zone resets 00:10:31.273 slat (usec): min=10, max=26865, avg=30.62, stdev=700.14 00:10:31.273 clat (usec): min=234, max=1431, avg=313.51, stdev=132.33 00:10:31.273 lat (usec): min=246, max=27625, avg=344.12, stdev=724.19 00:10:31.273 clat percentiles (usec): 00:10:31.273 | 1.00th=[ 237], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 243], 00:10:31.273 | 30.00th=[ 245], 40.00th=[ 249], 50.00th=[ 255], 60.00th=[ 277], 00:10:31.273 | 70.00th=[ 293], 80.00th=[ 343], 90.00th=[ 478], 95.00th=[ 644], 00:10:31.273 | 99.00th=[ 848], 99.50th=[ 898], 99.90th=[ 1303], 99.95th=[ 1434], 00:10:31.273 | 99.99th=[ 1434] 00:10:31.273 bw ( KiB/s): min= 4528, max= 4528, per=77.03%, avg=4528.00, stdev= 0.00, samples=1 00:10:31.273 iops : min= 1132, max= 1132, avg=1132.00, stdev= 0.00, samples=1 00:10:31.273 lat (usec) : 250=24.77%, 500=62.36%, 750=11.14%, 1000=1.52% 00:10:31.273 lat (msec) : 2=0.20% 00:10:31.273 cpu : usr=1.60%, sys=4.70%, ctx=2498, majf=0, minf=2 00:10:31.273 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:31.273 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.273 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:31.273 issued rwts: total=1024,1471,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:31.273 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:31.273 00:10:31.273 Run status group 0 (all jobs): 00:10:31.273 READ: bw=4092KiB/s (4190kB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=4096KiB (4194kB), run=1001-1001msec 00:10:31.273 WRITE: bw=5878KiB/s (6019kB/s), 5878KiB/s-5878KiB/s (6019kB/s-6019kB/s), io=5884KiB (6025kB), run=1001-1001msec 00:10:31.273 00:10:31.273 Disk stats (read/write): 00:10:31.273 nvme0n1: ios=1049/1179, merge=0/0, ticks=1444/356, in_queue=1800, util=98.90% 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@48 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:31.273 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 2 controller(s) 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@49 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1219 -- # local i=0 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:31.273 11:56:18 
nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1231 -- # return 0 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- target/nmic.sh@53 -- # nvmftestfini 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@117 -- # sync 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@120 -- # set +e 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:10:31.273 rmmod nvme_tcp 00:10:31.273 rmmod nvme_fabrics 00:10:31.273 rmmod nvme_keyring 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@124 -- # set -e 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@125 -- # return 0 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@489 -- # '[' -n 227919 ']' 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@490 -- # killprocess 227919 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@948 -- # '[' -z 227919 ']' 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@952 -- # kill -0 227919 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # uname 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 227919 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@966 -- # echo 'killing process with pid 227919' 00:10:31.273 killing process with pid 227919 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@967 -- # kill 227919 00:10:31.273 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@972 -- # wait 227919 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:31.534 11:56:18 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:10:33.512 00:10:33.512 real 0m14.972s 00:10:33.512 user 0m34.836s 00:10:33.512 sys 0m5.005s 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@1124 -- # xtrace_disable 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_nmic -- common/autotest_common.sh@10 -- # set +x 00:10:33.512 ************************************ 00:10:33.512 END TEST nvmf_nmic 00:10:33.512 ************************************ 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@34 -- # run_test nvmf_fio_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:10:33.512 11:56:20 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:10:33.773 ************************************ 00:10:33.773 START TEST nvmf_fio_target 00:10:33.773 ************************************ 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fio.sh --transport=tcp 00:10:33.773 * Looking for test storage... 00:10:33.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # uname -s 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:10:33.773 11:56:20 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.773 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@5 -- # export PATH 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@47 -- # : 0 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@16 -- # nvmftestinit 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@285 -- # xtrace_disable 00:10:33.774 11:56:20 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.073 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:10:39.073 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # pci_devs=() 00:10:39.073 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:10:39.073 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:10:39.073 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # net_devs=() 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # e810=() 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@296 -- # local -ga e810 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # x722=() 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@297 -- # local -ga x722 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # mlx=() 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@298 -- # local -ga mlx 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:10:39.074 Found 0000:86:00.0 (0x8086 - 0x159b) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:10:39.074 Found 0000:86:00.1 (0x8086 - 0x159b) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:10:39.074 Found net devices under 0000:86:00.0: cvl_0_0 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:10:39.074 Found net devices under 0000:86:00.1: cvl_0_1 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@414 -- # is_hw=yes 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:10:39.074 11:56:25 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:10:39.074 11:56:26 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:10:39.074 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:10:39.074 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.202 ms 00:10:39.074 00:10:39.074 --- 10.0.0.2 ping statistics --- 00:10:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.074 rtt min/avg/max/mdev = 0.202/0.202/0.202/0.000 ms 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:10:39.074 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:10:39.074 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:10:39.074 00:10:39.074 --- 10.0.0.1 ping statistics --- 00:10:39.074 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:10:39.074 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@422 -- # return 0 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:10:39.074 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@17 -- # nvmfappstart -m 0xF 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@481 -- # nvmfpid=232674 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@482 -- # waitforlisten 232674 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@829 -- # '[' -z 232674 ']' 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target 
-- common/autotest_common.sh@834 -- # local max_retries=100 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:39.075 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:10:39.075 11:56:26 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.075 [2024-07-25 11:56:26.243677] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:10:39.075 [2024-07-25 11:56:26.243719] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:10:39.075 EAL: No free 2048 kB hugepages reported on node 1 00:10:39.075 [2024-07-25 11:56:26.300324] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:10:39.335 [2024-07-25 11:56:26.383831] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:10:39.335 [2024-07-25 11:56:26.383866] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:10:39.335 [2024-07-25 11:56:26.383873] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:10:39.335 [2024-07-25 11:56:26.383884] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:10:39.335 [2024-07-25 11:56:26.383889] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
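As in the nmic run, nvmfappstart above amounts to launching the target inside the namespace and blocking until its JSON-RPC socket answers, after which fio.sh can start creating bdevs, RAID volumes and subsystems. A minimal sketch of that wait is shown below; the polling loop and the use of spdk_get_version are an illustration of the idea, not the literal helper from common.sh.

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!
# Poll the JSON-RPC socket until the app is ready to accept configuration calls.
until ./scripts/rpc.py -s /var/tmp/spdk.sock spdk_get_version >/dev/null 2>&1; do
    sleep 0.5
done
echo "nvmf_tgt ($nvmfpid) is up and listening on /var/tmp/spdk.sock"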
00:10:39.335 [2024-07-25 11:56:26.383927] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:10:39.335 [2024-07-25 11:56:26.383943] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:10:39.335 [2024-07-25 11:56:26.384031] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:10:39.335 [2024-07-25 11:56:26.384032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:10:39.904 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:10:39.904 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@862 -- # return 0 00:10:39.904 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:10:39.905 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:10:39.905 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:10:39.905 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:10:39.905 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:10:40.164 [2024-07-25 11:56:27.275078] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:10:40.164 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.425 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@21 -- # malloc_bdevs='Malloc0 ' 00:10:40.425 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.685 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@22 -- # malloc_bdevs+=Malloc1 00:10:40.685 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.685 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@24 -- # raid_malloc_bdevs='Malloc2 ' 00:10:40.685 11:56:27 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:40.945 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@25 -- # raid_malloc_bdevs+=Malloc3 00:10:40.945 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n raid0 -z 64 -r 0 -b 'Malloc2 Malloc3' 00:10:41.205 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.465 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@29 -- # concat_malloc_bdevs='Malloc4 ' 00:10:41.465 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.465 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@30 -- # concat_malloc_bdevs+='Malloc5 ' 00:10:41.465 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:10:41.724 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@31 -- # concat_malloc_bdevs+=Malloc6 00:10:41.724 11:56:28 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_create -n concat0 -r concat -z 64 -b 'Malloc4 Malloc5 Malloc6' 00:10:41.985 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:10:41.985 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:41.985 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:10:42.245 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@35 -- # for malloc_bdev in $malloc_bdevs 00:10:42.245 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:10:42.504 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:10:42.764 [2024-07-25 11:56:29.760838] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:10:42.764 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 raid0 00:10:42.764 11:56:29 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 concat0 00:10:43.024 11:56:30 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@46 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@48 -- # waitforserial SPDKISFASTANDAWESOME 4 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1198 -- # local i=0 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1200 -- # [[ -n 4 ]] 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1201 -- # nvme_device_counter=4 00:10:44.404 11:56:31 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1205 -- # sleep 2 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:10:46.312 11:56:33 
nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1207 -- # nvme_devices=4 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1208 -- # return 0 00:10:46.312 11:56:33 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t write -r 1 -v 00:10:46.312 [global] 00:10:46.312 thread=1 00:10:46.312 invalidate=1 00:10:46.312 rw=write 00:10:46.312 time_based=1 00:10:46.312 runtime=1 00:10:46.312 ioengine=libaio 00:10:46.312 direct=1 00:10:46.312 bs=4096 00:10:46.312 iodepth=1 00:10:46.312 norandommap=0 00:10:46.312 numjobs=1 00:10:46.312 00:10:46.312 verify_dump=1 00:10:46.312 verify_backlog=512 00:10:46.313 verify_state_save=0 00:10:46.313 do_verify=1 00:10:46.313 verify=crc32c-intel 00:10:46.313 [job0] 00:10:46.313 filename=/dev/nvme0n1 00:10:46.313 [job1] 00:10:46.313 filename=/dev/nvme0n2 00:10:46.313 [job2] 00:10:46.313 filename=/dev/nvme0n3 00:10:46.313 [job3] 00:10:46.313 filename=/dev/nvme0n4 00:10:46.313 Could not set queue depth (nvme0n1) 00:10:46.313 Could not set queue depth (nvme0n2) 00:10:46.313 Could not set queue depth (nvme0n3) 00:10:46.313 Could not set queue depth (nvme0n4) 00:10:46.606 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.606 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.606 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.606 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:46.606 fio-3.35 00:10:46.606 Starting 4 threads 00:10:47.984 00:10:47.984 job0: (groupid=0, jobs=1): err= 0: pid=234040: Thu Jul 25 11:56:34 2024 00:10:47.984 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:47.984 slat (nsec): min=7222, max=23800, avg=8455.97, stdev=1589.32 00:10:47.984 clat (usec): min=363, max=795, avg=547.39, stdev=51.51 00:10:47.984 lat (usec): min=371, max=804, avg=555.85, stdev=51.45 00:10:47.984 clat percentiles (usec): 00:10:47.984 | 1.00th=[ 392], 5.00th=[ 490], 10.00th=[ 502], 20.00th=[ 519], 00:10:47.984 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 553], 00:10:47.984 | 70.00th=[ 562], 80.00th=[ 578], 90.00th=[ 586], 95.00th=[ 627], 00:10:47.984 | 99.00th=[ 742], 99.50th=[ 775], 99.90th=[ 791], 99.95th=[ 799], 00:10:47.984 | 99.99th=[ 799] 00:10:47.984 write: IOPS=1218, BW=4875KiB/s (4992kB/s)(4880KiB/1001msec); 0 zone resets 00:10:47.984 slat (nsec): min=8906, max=57397, avg=12243.29, stdev=3844.97 00:10:47.984 clat (usec): min=233, max=1046, avg=335.13, stdev=106.10 00:10:47.984 lat (usec): min=244, max=1078, avg=347.37, stdev=106.67 00:10:47.984 clat percentiles (usec): 00:10:47.984 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 243], 20.00th=[ 253], 00:10:47.984 | 30.00th=[ 269], 40.00th=[ 289], 50.00th=[ 306], 60.00th=[ 322], 00:10:47.984 | 70.00th=[ 351], 80.00th=[ 396], 90.00th=[ 461], 95.00th=[ 578], 00:10:47.984 | 99.00th=[ 676], 99.50th=[ 725], 99.90th=[ 922], 99.95th=[ 1045], 00:10:47.984 | 99.99th=[ 1045] 00:10:47.984 bw ( KiB/s): min= 4520, max= 4520, per=34.56%, avg=4520.00, stdev= 0.00, samples=1 00:10:47.984 iops : min= 1130, max= 1130, avg=1130.00, stdev= 0.00, 
samples=1 00:10:47.984 lat (usec) : 250=9.98%, 500=43.49%, 750=45.81%, 1000=0.67% 00:10:47.984 lat (msec) : 2=0.04% 00:10:47.984 cpu : usr=2.30%, sys=3.20%, ctx=2244, majf=0, minf=1 00:10:47.984 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.984 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.984 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.984 issued rwts: total=1024,1220,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.984 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.984 job1: (groupid=0, jobs=1): err= 0: pid=234047: Thu Jul 25 11:56:34 2024 00:10:47.984 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:47.985 slat (nsec): min=2219, max=38649, avg=7285.05, stdev=2211.29 00:10:47.985 clat (usec): min=338, max=1117, avg=573.77, stdev=101.36 00:10:47.985 lat (usec): min=342, max=1143, avg=581.05, stdev=102.04 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[ 371], 5.00th=[ 482], 10.00th=[ 506], 20.00th=[ 529], 00:10:47.985 | 30.00th=[ 537], 40.00th=[ 545], 50.00th=[ 553], 60.00th=[ 562], 00:10:47.985 | 70.00th=[ 570], 80.00th=[ 586], 90.00th=[ 676], 95.00th=[ 775], 00:10:47.985 | 99.00th=[ 1057], 99.50th=[ 1074], 99.90th=[ 1090], 99.95th=[ 1123], 00:10:47.985 | 99.99th=[ 1123] 00:10:47.985 write: IOPS=1132, BW=4531KiB/s (4640kB/s)(4536KiB/1001msec); 0 zone resets 00:10:47.985 slat (nsec): min=3486, max=39800, avg=11170.99, stdev=3047.59 00:10:47.985 clat (usec): min=221, max=1711, avg=340.52, stdev=126.88 00:10:47.985 lat (usec): min=237, max=1724, avg=351.69, stdev=127.33 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[ 235], 5.00th=[ 239], 10.00th=[ 241], 20.00th=[ 249], 00:10:47.985 | 30.00th=[ 281], 40.00th=[ 289], 50.00th=[ 297], 60.00th=[ 310], 00:10:47.985 | 70.00th=[ 351], 80.00th=[ 404], 90.00th=[ 486], 95.00th=[ 635], 00:10:47.985 | 99.00th=[ 734], 99.50th=[ 955], 99.90th=[ 1319], 99.95th=[ 1713], 00:10:47.985 | 99.99th=[ 1713] 00:10:47.985 bw ( KiB/s): min= 4152, max= 4152, per=31.74%, avg=4152.00, stdev= 0.00, samples=1 00:10:47.985 iops : min= 1038, max= 1038, avg=1038.00, stdev= 0.00, samples=1 00:10:47.985 lat (usec) : 250=10.94%, 500=40.59%, 750=44.62%, 1000=2.87% 00:10:47.985 lat (msec) : 2=0.97% 00:10:47.985 cpu : usr=2.00%, sys=3.00%, ctx=2158, majf=0, minf=1 00:10:47.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 issued rwts: total=1024,1134,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.985 job2: (groupid=0, jobs=1): err= 0: pid=234067: Thu Jul 25 11:56:34 2024 00:10:47.985 read: IOPS=18, BW=75.8KiB/s (77.7kB/s)(76.0KiB/1002msec) 00:10:47.985 slat (nsec): min=4966, max=21893, avg=18806.89, stdev=5614.25 00:10:47.985 clat (usec): min=449, max=42522, avg=39658.21, stdev=9504.06 00:10:47.985 lat (usec): min=456, max=42544, avg=39677.02, stdev=9507.25 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[ 449], 5.00th=[ 449], 10.00th=[40633], 20.00th=[41157], 00:10:47.985 | 30.00th=[41681], 40.00th=[41681], 50.00th=[42206], 60.00th=[42206], 00:10:47.985 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42730], 00:10:47.985 | 99.00th=[42730], 99.50th=[42730], 99.90th=[42730], 99.95th=[42730], 00:10:47.985 | 
99.99th=[42730] 00:10:47.985 write: IOPS=510, BW=2044KiB/s (2093kB/s)(2048KiB/1002msec); 0 zone resets 00:10:47.985 slat (usec): min=6, max=40951, avg=147.00, stdev=2240.19 00:10:47.985 clat (usec): min=222, max=1031, avg=334.55, stdev=104.31 00:10:47.985 lat (usec): min=229, max=41658, avg=481.55, stdev=2274.02 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[ 227], 5.00th=[ 255], 10.00th=[ 260], 20.00th=[ 265], 00:10:47.985 | 30.00th=[ 273], 40.00th=[ 281], 50.00th=[ 302], 60.00th=[ 322], 00:10:47.985 | 70.00th=[ 334], 80.00th=[ 396], 90.00th=[ 453], 95.00th=[ 611], 00:10:47.985 | 99.00th=[ 676], 99.50th=[ 701], 99.90th=[ 1029], 99.95th=[ 1029], 00:10:47.985 | 99.99th=[ 1029] 00:10:47.985 bw ( KiB/s): min= 4096, max= 4096, per=31.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.985 lat (usec) : 250=3.95%, 500=86.06%, 750=6.40% 00:10:47.985 lat (msec) : 2=0.19%, 50=3.39% 00:10:47.985 cpu : usr=0.40%, sys=0.40%, ctx=534, majf=0, minf=1 00:10:47.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 issued rwts: total=19,512,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:47.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.985 job3: (groupid=0, jobs=1): err= 0: pid=234073: Thu Jul 25 11:56:34 2024 00:10:47.985 read: IOPS=18, BW=73.6KiB/s (75.3kB/s)(76.0KiB/1033msec) 00:10:47.985 slat (nsec): min=9598, max=23981, avg=19778.79, stdev=5494.13 00:10:47.985 clat (usec): min=40847, max=42067, avg=41875.55, stdev=334.31 00:10:47.985 lat (usec): min=40857, max=42091, avg=41895.33, stdev=336.03 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[40633], 5.00th=[40633], 10.00th=[41157], 20.00th=[41681], 00:10:47.985 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:47.985 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:47.985 | 99.00th=[42206], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:47.985 | 99.99th=[42206] 00:10:47.985 write: IOPS=495, BW=1983KiB/s (2030kB/s)(2048KiB/1033msec); 0 zone resets 00:10:47.985 slat (usec): min=11, max=41103, avg=149.38, stdev=2219.55 00:10:47.985 clat (usec): min=243, max=879, avg=309.26, stdev=97.56 00:10:47.985 lat (usec): min=255, max=41818, avg=458.64, stdev=2249.59 00:10:47.985 clat percentiles (usec): 00:10:47.985 | 1.00th=[ 247], 5.00th=[ 249], 10.00th=[ 253], 20.00th=[ 258], 00:10:47.985 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 269], 60.00th=[ 281], 00:10:47.985 | 70.00th=[ 293], 80.00th=[ 330], 90.00th=[ 412], 95.00th=[ 519], 00:10:47.985 | 99.00th=[ 660], 99.50th=[ 717], 99.90th=[ 881], 99.95th=[ 881], 00:10:47.985 | 99.99th=[ 881] 00:10:47.985 bw ( KiB/s): min= 4096, max= 4096, per=31.31%, avg=4096.00, stdev= 0.00, samples=1 00:10:47.985 iops : min= 1024, max= 1024, avg=1024.00, stdev= 0.00, samples=1 00:10:47.985 lat (usec) : 250=6.40%, 500=82.30%, 750=7.34%, 1000=0.38% 00:10:47.985 lat (msec) : 50=3.58% 00:10:47.985 cpu : usr=0.29%, sys=1.07%, ctx=534, majf=0, minf=2 00:10:47.985 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:47.985 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:47.985 issued rwts: total=19,512,0,0 
short=0,0,0,0 dropped=0,0,0,0 00:10:47.985 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:47.985 00:10:47.985 Run status group 0 (all jobs): 00:10:47.985 READ: bw=8077KiB/s (8271kB/s), 73.6KiB/s-4092KiB/s (75.3kB/s-4190kB/s), io=8344KiB (8544kB), run=1001-1033msec 00:10:47.985 WRITE: bw=12.8MiB/s (13.4MB/s), 1983KiB/s-4875KiB/s (2030kB/s-4992kB/s), io=13.2MiB (13.8MB), run=1001-1033msec 00:10:47.985 00:10:47.985 Disk stats (read/write): 00:10:47.985 nvme0n1: ios=925/1024, merge=0/0, ticks=571/331, in_queue=902, util=89.58% 00:10:47.985 nvme0n2: ios=888/1024, merge=0/0, ticks=599/341, in_queue=940, util=91.67% 00:10:47.985 nvme0n3: ios=38/512, merge=0/0, ticks=1509/170, in_queue=1679, util=95.31% 00:10:47.985 nvme0n4: ios=74/512, merge=0/0, ticks=897/152, in_queue=1049, util=99.79% 00:10:47.985 11:56:34 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t randwrite -r 1 -v 00:10:47.985 [global] 00:10:47.985 thread=1 00:10:47.985 invalidate=1 00:10:47.985 rw=randwrite 00:10:47.985 time_based=1 00:10:47.985 runtime=1 00:10:47.985 ioengine=libaio 00:10:47.985 direct=1 00:10:47.985 bs=4096 00:10:47.985 iodepth=1 00:10:47.985 norandommap=0 00:10:47.985 numjobs=1 00:10:47.985 00:10:47.985 verify_dump=1 00:10:47.985 verify_backlog=512 00:10:47.985 verify_state_save=0 00:10:47.985 do_verify=1 00:10:47.985 verify=crc32c-intel 00:10:47.985 [job0] 00:10:47.985 filename=/dev/nvme0n1 00:10:47.985 [job1] 00:10:47.985 filename=/dev/nvme0n2 00:10:47.985 [job2] 00:10:47.985 filename=/dev/nvme0n3 00:10:47.985 [job3] 00:10:47.985 filename=/dev/nvme0n4 00:10:47.985 Could not set queue depth (nvme0n1) 00:10:47.985 Could not set queue depth (nvme0n2) 00:10:47.985 Could not set queue depth (nvme0n3) 00:10:47.985 Could not set queue depth (nvme0n4) 00:10:48.244 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.244 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.244 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.244 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:48.244 fio-3.35 00:10:48.244 Starting 4 threads 00:10:49.622 00:10:49.622 job0: (groupid=0, jobs=1): err= 0: pid=234526: Thu Jul 25 11:56:36 2024 00:10:49.622 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:49.622 slat (nsec): min=7109, max=40351, avg=8129.43, stdev=1816.74 00:10:49.622 clat (usec): min=459, max=921, avg=561.73, stdev=31.35 00:10:49.622 lat (usec): min=467, max=929, avg=569.86, stdev=31.33 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 469], 5.00th=[ 515], 10.00th=[ 537], 20.00th=[ 553], 00:10:49.622 | 30.00th=[ 553], 40.00th=[ 562], 50.00th=[ 562], 60.00th=[ 570], 00:10:49.622 | 70.00th=[ 570], 80.00th=[ 578], 90.00th=[ 586], 95.00th=[ 586], 00:10:49.622 | 99.00th=[ 611], 99.50th=[ 701], 99.90th=[ 865], 99.95th=[ 922], 00:10:49.622 | 99.99th=[ 922] 00:10:49.622 write: IOPS=1232, BW=4931KiB/s (5049kB/s)(4936KiB/1001msec); 0 zone resets 00:10:49.622 slat (nsec): min=10119, max=41779, avg=12155.90, stdev=2943.77 00:10:49.622 clat (usec): min=242, max=3436, avg=319.66, stdev=162.39 00:10:49.622 lat (usec): min=252, max=3456, avg=331.81, stdev=163.65 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 245], 
5.00th=[ 249], 10.00th=[ 251], 20.00th=[ 255], 00:10:49.622 | 30.00th=[ 260], 40.00th=[ 265], 50.00th=[ 277], 60.00th=[ 289], 00:10:49.622 | 70.00th=[ 302], 80.00th=[ 330], 90.00th=[ 388], 95.00th=[ 537], 00:10:49.622 | 99.00th=[ 930], 99.50th=[ 1287], 99.90th=[ 1418], 99.95th=[ 3425], 00:10:49.622 | 99.99th=[ 3425] 00:10:49.622 bw ( KiB/s): min= 4232, max= 4232, per=19.44%, avg=4232.00, stdev= 0.00, samples=1 00:10:49.622 iops : min= 1058, max= 1058, avg=1058.00, stdev= 0.00, samples=1 00:10:49.622 lat (usec) : 250=3.90%, 500=49.29%, 750=44.82%, 1000=1.51% 00:10:49.622 lat (msec) : 2=0.44%, 4=0.04% 00:10:49.622 cpu : usr=1.90%, sys=3.90%, ctx=2258, majf=0, minf=1 00:10:49.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 issued rwts: total=1024,1234,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.622 job1: (groupid=0, jobs=1): err= 0: pid=234550: Thu Jul 25 11:56:36 2024 00:10:49.622 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:49.622 slat (nsec): min=7227, max=37618, avg=8065.33, stdev=1171.47 00:10:49.622 clat (usec): min=361, max=791, avg=523.87, stdev=45.88 00:10:49.622 lat (usec): min=369, max=799, avg=531.94, stdev=45.90 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 375], 5.00th=[ 404], 10.00th=[ 445], 20.00th=[ 519], 00:10:49.622 | 30.00th=[ 529], 40.00th=[ 537], 50.00th=[ 537], 60.00th=[ 545], 00:10:49.622 | 70.00th=[ 545], 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 562], 00:10:49.622 | 99.00th=[ 578], 99.50th=[ 586], 99.90th=[ 742], 99.95th=[ 791], 00:10:49.622 | 99.99th=[ 791] 00:10:49.622 write: IOPS=1376, BW=5506KiB/s (5639kB/s)(5512KiB/1001msec); 0 zone resets 00:10:49.622 slat (nsec): min=10089, max=45985, avg=11521.66, stdev=1766.73 00:10:49.622 clat (usec): min=235, max=904, avg=313.51, stdev=91.32 00:10:49.622 lat (usec): min=247, max=943, avg=325.03, stdev=91.78 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 239], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 249], 00:10:49.622 | 30.00th=[ 253], 40.00th=[ 260], 50.00th=[ 273], 60.00th=[ 302], 00:10:49.622 | 70.00th=[ 338], 80.00th=[ 379], 90.00th=[ 408], 95.00th=[ 510], 00:10:49.622 | 99.00th=[ 652], 99.50th=[ 660], 99.90th=[ 816], 99.95th=[ 906], 00:10:49.622 | 99.99th=[ 906] 00:10:49.622 bw ( KiB/s): min= 5352, max= 5352, per=24.58%, avg=5352.00, stdev= 0.00, samples=1 00:10:49.622 iops : min= 1338, max= 1338, avg=1338.00, stdev= 0.00, samples=1 00:10:49.622 lat (usec) : 250=13.70%, 500=46.92%, 750=39.22%, 1000=0.17% 00:10:49.622 cpu : usr=2.10%, sys=3.90%, ctx=2402, majf=0, minf=1 00:10:49.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 issued rwts: total=1024,1378,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.622 job2: (groupid=0, jobs=1): err= 0: pid=234582: Thu Jul 25 11:56:36 2024 00:10:49.622 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:49.622 slat (nsec): min=6445, max=31854, avg=7293.00, stdev=979.26 00:10:49.622 clat (usec): min=460, max=1091, avg=577.31, stdev=50.23 00:10:49.622 lat (usec): 
min=467, max=1098, avg=584.60, stdev=50.23 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 486], 5.00th=[ 510], 10.00th=[ 537], 20.00th=[ 553], 00:10:49.622 | 30.00th=[ 562], 40.00th=[ 570], 50.00th=[ 578], 60.00th=[ 578], 00:10:49.622 | 70.00th=[ 586], 80.00th=[ 594], 90.00th=[ 611], 95.00th=[ 619], 00:10:49.622 | 99.00th=[ 807], 99.50th=[ 938], 99.90th=[ 1045], 99.95th=[ 1090], 00:10:49.622 | 99.99th=[ 1090] 00:10:49.622 write: IOPS=1358, BW=5435KiB/s (5565kB/s)(5440KiB/1001msec); 0 zone resets 00:10:49.622 slat (nsec): min=9258, max=37251, avg=10657.11, stdev=2450.56 00:10:49.622 clat (usec): min=235, max=952, avg=280.46, stdev=81.70 00:10:49.622 lat (usec): min=246, max=989, avg=291.11, stdev=83.17 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 239], 5.00th=[ 241], 10.00th=[ 243], 20.00th=[ 247], 00:10:49.622 | 30.00th=[ 249], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 262], 00:10:49.622 | 70.00th=[ 269], 80.00th=[ 281], 90.00th=[ 318], 95.00th=[ 404], 00:10:49.622 | 99.00th=[ 701], 99.50th=[ 750], 99.90th=[ 857], 99.95th=[ 955], 00:10:49.622 | 99.99th=[ 955] 00:10:49.622 bw ( KiB/s): min= 5040, max= 5040, per=23.15%, avg=5040.00, stdev= 0.00, samples=1 00:10:49.622 iops : min= 1260, max= 1260, avg=1260.00, stdev= 0.00, samples=1 00:10:49.622 lat (usec) : 250=19.04%, 500=37.84%, 750=42.28%, 1000=0.76% 00:10:49.622 lat (msec) : 2=0.08% 00:10:49.622 cpu : usr=1.50%, sys=2.00%, ctx=2386, majf=0, minf=1 00:10:49.622 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.622 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.622 issued rwts: total=1024,1360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.622 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.622 job3: (groupid=0, jobs=1): err= 0: pid=234598: Thu Jul 25 11:56:36 2024 00:10:49.622 read: IOPS=1022, BW=4092KiB/s (4190kB/s)(4096KiB/1001msec) 00:10:49.622 slat (nsec): min=7359, max=38145, avg=8250.21, stdev=1106.53 00:10:49.622 clat (usec): min=364, max=855, avg=529.55, stdev=46.98 00:10:49.622 lat (usec): min=372, max=864, avg=537.80, stdev=47.00 00:10:49.622 clat percentiles (usec): 00:10:49.622 | 1.00th=[ 383], 5.00th=[ 408], 10.00th=[ 453], 20.00th=[ 523], 00:10:49.622 | 30.00th=[ 537], 40.00th=[ 537], 50.00th=[ 545], 60.00th=[ 545], 00:10:49.622 | 70.00th=[ 553], 80.00th=[ 553], 90.00th=[ 562], 95.00th=[ 570], 00:10:49.622 | 99.00th=[ 578], 99.50th=[ 603], 99.90th=[ 775], 99.95th=[ 857], 00:10:49.622 | 99.99th=[ 857] 00:10:49.623 write: IOPS=1475, BW=5902KiB/s (6044kB/s)(5908KiB/1001msec); 0 zone resets 00:10:49.623 slat (nsec): min=10431, max=74046, avg=11786.91, stdev=2276.48 00:10:49.623 clat (usec): min=238, max=839, avg=287.12, stdev=65.45 00:10:49.623 lat (usec): min=248, max=913, avg=298.91, stdev=66.08 00:10:49.623 clat percentiles (usec): 00:10:49.623 | 1.00th=[ 245], 5.00th=[ 247], 10.00th=[ 249], 20.00th=[ 253], 00:10:49.623 | 30.00th=[ 255], 40.00th=[ 260], 50.00th=[ 265], 60.00th=[ 269], 00:10:49.623 | 70.00th=[ 281], 80.00th=[ 297], 90.00th=[ 351], 95.00th=[ 408], 00:10:49.623 | 99.00th=[ 652], 99.50th=[ 652], 99.90th=[ 676], 99.95th=[ 840], 00:10:49.623 | 99.99th=[ 840] 00:10:49.623 bw ( KiB/s): min= 5760, max= 5760, per=26.45%, avg=5760.00, stdev= 0.00, samples=1 00:10:49.623 iops : min= 1440, max= 1440, avg=1440.00, stdev= 0.00, samples=1 00:10:49.623 lat (usec) : 250=7.24%, 500=56.50%, 750=36.15%, 1000=0.12% 00:10:49.623 cpu : 
usr=2.90%, sys=3.40%, ctx=2502, majf=0, minf=2 00:10:49.623 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:49.623 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.623 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:49.623 issued rwts: total=1024,1477,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:49.623 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:49.623 00:10:49.623 Run status group 0 (all jobs): 00:10:49.623 READ: bw=16.0MiB/s (16.8MB/s), 4092KiB/s-4092KiB/s (4190kB/s-4190kB/s), io=16.0MiB (16.8MB), run=1001-1001msec 00:10:49.623 WRITE: bw=21.3MiB/s (22.3MB/s), 4931KiB/s-5902KiB/s (5049kB/s-6044kB/s), io=21.3MiB (22.3MB), run=1001-1001msec 00:10:49.623 00:10:49.623 Disk stats (read/write): 00:10:49.623 nvme0n1: ios=857/1024, merge=0/0, ticks=682/323, in_queue=1005, util=88.98% 00:10:49.623 nvme0n2: ios=965/1024, merge=0/0, ticks=781/292, in_queue=1073, util=93.22% 00:10:49.623 nvme0n3: ios=909/1024, merge=0/0, ticks=1064/285, in_queue=1349, util=95.89% 00:10:49.623 nvme0n4: ios=978/1024, merge=0/0, ticks=549/282, in_queue=831, util=91.86% 00:10:49.623 11:56:36 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t write -r 1 -v 00:10:49.623 [global] 00:10:49.623 thread=1 00:10:49.623 invalidate=1 00:10:49.623 rw=write 00:10:49.623 time_based=1 00:10:49.623 runtime=1 00:10:49.623 ioengine=libaio 00:10:49.623 direct=1 00:10:49.623 bs=4096 00:10:49.623 iodepth=128 00:10:49.623 norandommap=0 00:10:49.623 numjobs=1 00:10:49.623 00:10:49.623 verify_dump=1 00:10:49.623 verify_backlog=512 00:10:49.623 verify_state_save=0 00:10:49.623 do_verify=1 00:10:49.623 verify=crc32c-intel 00:10:49.623 [job0] 00:10:49.623 filename=/dev/nvme0n1 00:10:49.623 [job1] 00:10:49.623 filename=/dev/nvme0n2 00:10:49.623 [job2] 00:10:49.623 filename=/dev/nvme0n3 00:10:49.623 [job3] 00:10:49.623 filename=/dev/nvme0n4 00:10:49.623 Could not set queue depth (nvme0n1) 00:10:49.623 Could not set queue depth (nvme0n2) 00:10:49.623 Could not set queue depth (nvme0n3) 00:10:49.623 Could not set queue depth (nvme0n4) 00:10:49.623 job0: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.623 job1: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.623 job2: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.623 job3: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:49.623 fio-3.35 00:10:49.623 Starting 4 threads 00:10:51.003 00:10:51.003 job0: (groupid=0, jobs=1): err= 0: pid=234987: Thu Jul 25 11:56:38 2024 00:10:51.003 read: IOPS=4043, BW=15.8MiB/s (16.6MB/s)(16.0MiB/1013msec) 00:10:51.003 slat (nsec): min=1105, max=11365k, avg=103401.00, stdev=711406.26 00:10:51.003 clat (usec): min=4851, max=34181, avg=14238.49, stdev=4695.25 00:10:51.003 lat (usec): min=4858, max=34187, avg=14341.89, stdev=4734.44 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 5800], 5.00th=[ 8717], 10.00th=[ 9634], 20.00th=[10421], 00:10:51.003 | 30.00th=[11207], 40.00th=[12125], 50.00th=[12911], 60.00th=[14484], 00:10:51.003 | 70.00th=[15664], 80.00th=[17957], 90.00th=[20579], 95.00th=[23462], 00:10:51.003 | 99.00th=[30016], 99.50th=[31327], 99.90th=[34341], 99.95th=[34341], 00:10:51.003 | 
99.99th=[34341] 00:10:51.003 write: IOPS=4441, BW=17.3MiB/s (18.2MB/s)(17.6MiB/1013msec); 0 zone resets 00:10:51.003 slat (nsec): min=1963, max=29441k, avg=118548.36, stdev=779093.42 00:10:51.003 clat (usec): min=1497, max=43744, avg=14656.79, stdev=6615.76 00:10:51.003 lat (usec): min=1504, max=43786, avg=14775.34, stdev=6653.39 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 5014], 5.00th=[ 6980], 10.00th=[ 8455], 20.00th=[ 9634], 00:10:51.003 | 30.00th=[10814], 40.00th=[11731], 50.00th=[12911], 60.00th=[14615], 00:10:51.003 | 70.00th=[16450], 80.00th=[18482], 90.00th=[23462], 95.00th=[26870], 00:10:51.003 | 99.00th=[40109], 99.50th=[41157], 99.90th=[43779], 99.95th=[43779], 00:10:51.003 | 99.99th=[43779] 00:10:51.003 bw ( KiB/s): min=15136, max=19832, per=26.38%, avg=17484.00, stdev=3320.57, samples=2 00:10:51.003 iops : min= 3784, max= 4958, avg=4371.00, stdev=830.14, samples=2 00:10:51.003 lat (msec) : 2=0.15%, 4=0.22%, 10=20.07%, 20=66.75%, 50=12.81% 00:10:51.003 cpu : usr=3.16%, sys=3.46%, ctx=575, majf=0, minf=1 00:10:51.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:51.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.003 issued rwts: total=4096,4499,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.003 job1: (groupid=0, jobs=1): err= 0: pid=234990: Thu Jul 25 11:56:38 2024 00:10:51.003 read: IOPS=3912, BW=15.3MiB/s (16.0MB/s)(15.4MiB/1007msec) 00:10:51.003 slat (nsec): min=1030, max=21062k, avg=114458.07, stdev=746546.67 00:10:51.003 clat (usec): min=1672, max=59523, avg=15739.42, stdev=8116.72 00:10:51.003 lat (usec): min=6136, max=59550, avg=15853.88, stdev=8155.10 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 6390], 5.00th=[ 8225], 10.00th=[ 9241], 20.00th=[10028], 00:10:51.003 | 30.00th=[11731], 40.00th=[13042], 50.00th=[13960], 60.00th=[14877], 00:10:51.003 | 70.00th=[16909], 80.00th=[19530], 90.00th=[22938], 95.00th=[25822], 00:10:51.003 | 99.00th=[56886], 99.50th=[58459], 99.90th=[59507], 99.95th=[59507], 00:10:51.003 | 99.99th=[59507] 00:10:51.003 write: IOPS=4067, BW=15.9MiB/s (16.7MB/s)(16.0MiB/1007msec); 0 zone resets 00:10:51.003 slat (nsec): min=1877, max=8596.0k, avg=123920.48, stdev=615244.69 00:10:51.003 clat (usec): min=2034, max=59471, avg=15333.03, stdev=7983.81 00:10:51.003 lat (usec): min=2042, max=59475, avg=15456.95, stdev=8026.91 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 4948], 5.00th=[ 6652], 10.00th=[ 8291], 20.00th=[ 9765], 00:10:51.003 | 30.00th=[11338], 40.00th=[12649], 50.00th=[13435], 60.00th=[14877], 00:10:51.003 | 70.00th=[16450], 80.00th=[18744], 90.00th=[24511], 95.00th=[33162], 00:10:51.003 | 99.00th=[45351], 99.50th=[45876], 99.90th=[47973], 99.95th=[47973], 00:10:51.003 | 99.99th=[59507] 00:10:51.003 bw ( KiB/s): min=14112, max=18656, per=24.72%, avg=16384.00, stdev=3213.09, samples=2 00:10:51.003 iops : min= 3528, max= 4664, avg=4096.00, stdev=803.27, samples=2 00:10:51.003 lat (msec) : 2=0.01%, 4=0.45%, 10=20.71%, 20=62.29%, 50=15.46% 00:10:51.003 lat (msec) : 100=1.08% 00:10:51.003 cpu : usr=1.99%, sys=2.88%, ctx=595, majf=0, minf=1 00:10:51.003 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:51.003 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.003 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=0.0%, >=64=0.1% 00:10:51.003 issued rwts: total=3940,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.003 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.003 job2: (groupid=0, jobs=1): err= 0: pid=234991: Thu Jul 25 11:56:38 2024 00:10:51.003 read: IOPS=3658, BW=14.3MiB/s (15.0MB/s)(14.3MiB/1003msec) 00:10:51.003 slat (nsec): min=1049, max=35712k, avg=132737.27, stdev=1061440.04 00:10:51.003 clat (usec): min=904, max=65809, avg=16130.57, stdev=9607.77 00:10:51.003 lat (usec): min=4319, max=65818, avg=16263.31, stdev=9679.79 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 5145], 5.00th=[ 7832], 10.00th=[ 8717], 20.00th=[10028], 00:10:51.003 | 30.00th=[11338], 40.00th=[12911], 50.00th=[13829], 60.00th=[14877], 00:10:51.003 | 70.00th=[16057], 80.00th=[17957], 90.00th=[23987], 95.00th=[42206], 00:10:51.003 | 99.00th=[54789], 99.50th=[65799], 99.90th=[65799], 99.95th=[65799], 00:10:51.003 | 99.99th=[65799] 00:10:51.003 write: IOPS=4083, BW=16.0MiB/s (16.7MB/s)(16.0MiB/1003msec); 0 zone resets 00:10:51.003 slat (nsec): min=1966, max=15408k, avg=116929.39, stdev=681756.08 00:10:51.003 clat (usec): min=2045, max=75758, avg=16592.31, stdev=9379.54 00:10:51.003 lat (usec): min=2058, max=75766, avg=16709.24, stdev=9414.90 00:10:51.003 clat percentiles (usec): 00:10:51.003 | 1.00th=[ 4555], 5.00th=[ 7177], 10.00th=[ 7832], 20.00th=[10421], 00:10:51.003 | 30.00th=[11731], 40.00th=[12649], 50.00th=[14222], 60.00th=[15664], 00:10:51.003 | 70.00th=[17957], 80.00th=[23200], 90.00th=[27657], 95.00th=[32375], 00:10:51.003 | 99.00th=[68682], 99.50th=[69731], 99.90th=[76022], 99.95th=[76022], 00:10:51.003 | 99.99th=[76022] 00:10:51.004 bw ( KiB/s): min=16040, max=16384, per=24.46%, avg=16212.00, stdev=243.24, samples=2 00:10:51.004 iops : min= 4010, max= 4096, avg=4053.00, stdev=60.81, samples=2 00:10:51.004 lat (usec) : 1000=0.01% 00:10:51.004 lat (msec) : 4=0.18%, 10=18.70%, 20=61.89%, 50=18.09%, 100=1.12% 00:10:51.004 cpu : usr=2.00%, sys=2.99%, ctx=609, majf=0, minf=1 00:10:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.004 issued rwts: total=3669,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.004 job3: (groupid=0, jobs=1): err= 0: pid=234992: Thu Jul 25 11:56:38 2024 00:10:51.004 read: IOPS=3866, BW=15.1MiB/s (15.8MB/s)(15.1MiB/1001msec) 00:10:51.004 slat (nsec): min=1245, max=12972k, avg=118234.47, stdev=759494.34 00:10:51.004 clat (usec): min=636, max=55683, avg=14045.18, stdev=6447.72 00:10:51.004 lat (usec): min=5825, max=55705, avg=14163.41, stdev=6502.42 00:10:51.004 clat percentiles (usec): 00:10:51.004 | 1.00th=[ 7111], 5.00th=[ 7963], 10.00th=[ 8717], 20.00th=[10028], 00:10:51.004 | 30.00th=[10814], 40.00th=[11469], 50.00th=[12256], 60.00th=[13042], 00:10:51.004 | 70.00th=[14615], 80.00th=[16450], 90.00th=[21365], 95.00th=[25560], 00:10:51.004 | 99.00th=[48497], 99.50th=[48497], 99.90th=[53216], 99.95th=[53216], 00:10:51.004 | 99.99th=[55837] 00:10:51.004 write: IOPS=4091, BW=16.0MiB/s (16.8MB/s)(16.0MiB/1001msec); 0 zone resets 00:10:51.004 slat (usec): min=2, max=10374, avg=115.38, stdev=575.48 00:10:51.004 clat (usec): min=1460, max=53340, avg=17677.52, stdev=8681.87 00:10:51.004 lat (usec): min=1472, max=53346, avg=17792.90, stdev=8710.80 00:10:51.004 clat percentiles 
(usec): 00:10:51.004 | 1.00th=[ 4490], 5.00th=[ 6521], 10.00th=[ 8094], 20.00th=[10552], 00:10:51.004 | 30.00th=[12256], 40.00th=[13435], 50.00th=[16188], 60.00th=[18744], 00:10:51.004 | 70.00th=[21890], 80.00th=[23462], 90.00th=[28967], 95.00th=[35914], 00:10:51.004 | 99.00th=[43779], 99.50th=[47973], 99.90th=[47973], 99.95th=[47973], 00:10:51.004 | 99.99th=[53216] 00:10:51.004 bw ( KiB/s): min=17128, max=17128, per=25.84%, avg=17128.00, stdev= 0.00, samples=1 00:10:51.004 iops : min= 4282, max= 4282, avg=4282.00, stdev= 0.00, samples=1 00:10:51.004 lat (usec) : 750=0.01% 00:10:51.004 lat (msec) : 2=0.04%, 4=0.21%, 10=18.05%, 20=57.02%, 50=24.49% 00:10:51.004 lat (msec) : 100=0.18% 00:10:51.004 cpu : usr=1.70%, sys=3.50%, ctx=675, majf=0, minf=1 00:10:51.004 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.2% 00:10:51.004 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:51.004 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:51.004 issued rwts: total=3870,4096,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:51.004 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:51.004 00:10:51.004 Run status group 0 (all jobs): 00:10:51.004 READ: bw=60.1MiB/s (63.0MB/s), 14.3MiB/s-15.8MiB/s (15.0MB/s-16.6MB/s), io=60.8MiB (63.8MB), run=1001-1013msec 00:10:51.004 WRITE: bw=64.7MiB/s (67.9MB/s), 15.9MiB/s-17.3MiB/s (16.7MB/s-18.2MB/s), io=65.6MiB (68.8MB), run=1001-1013msec 00:10:51.004 00:10:51.004 Disk stats (read/write): 00:10:51.004 nvme0n1: ios=3096/3264, merge=0/0, ticks=43620/46446, in_queue=90066, util=98.20% 00:10:51.004 nvme0n2: ios=3093/3197, merge=0/0, ticks=30095/32502, in_queue=62597, util=97.23% 00:10:51.004 nvme0n3: ios=3240/3584, merge=0/0, ticks=35243/36466, in_queue=71709, util=86.50% 00:10:51.004 nvme0n4: ios=3092/3556, merge=0/0, ticks=38799/48363, in_queue=87162, util=97.80% 00:10:51.004 11:56:38 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 128 -t randwrite -r 1 -v 00:10:51.004 [global] 00:10:51.004 thread=1 00:10:51.004 invalidate=1 00:10:51.004 rw=randwrite 00:10:51.004 time_based=1 00:10:51.004 runtime=1 00:10:51.004 ioengine=libaio 00:10:51.004 direct=1 00:10:51.004 bs=4096 00:10:51.004 iodepth=128 00:10:51.004 norandommap=0 00:10:51.004 numjobs=1 00:10:51.004 00:10:51.004 verify_dump=1 00:10:51.004 verify_backlog=512 00:10:51.004 verify_state_save=0 00:10:51.004 do_verify=1 00:10:51.004 verify=crc32c-intel 00:10:51.004 [job0] 00:10:51.004 filename=/dev/nvme0n1 00:10:51.004 [job1] 00:10:51.004 filename=/dev/nvme0n2 00:10:51.004 [job2] 00:10:51.004 filename=/dev/nvme0n3 00:10:51.004 [job3] 00:10:51.004 filename=/dev/nvme0n4 00:10:51.004 Could not set queue depth (nvme0n1) 00:10:51.004 Could not set queue depth (nvme0n2) 00:10:51.004 Could not set queue depth (nvme0n3) 00:10:51.004 Could not set queue depth (nvme0n4) 00:10:51.263 job0: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.263 job1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.263 job2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.263 job3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:10:51.263 fio-3.35 00:10:51.263 Starting 4 threads 00:10:52.643 00:10:52.643 job0: 
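The [global] and [job] lines fio-wrapper is echoing here (the listing continues with [job3] just below) are the generated fio job file for the 4 KiB, queue-depth-128 randwrite pass against the four NVMe/TCP namespaces. As a rough standalone sketch only -- assuming fio is installed, /dev/nvme0n1 through /dev/nvme0n4 are the namespaces connected earlier with "nvme connect", and with the job-file path invented for illustration -- the same workload could be reproduced outside the Jenkins harness like this:

#!/usr/bin/env bash
# Rough sketch: rebuild the job file that fio-wrapper echoes in this trace and run it.
# Assumptions: fio is installed, /dev/nvme0n1..n4 are the connected NVMe/TCP namespaces,
# and /tmp/nvmf-randwrite.fio is an invented path used only for this example.
cat > /tmp/nvmf-randwrite.fio <<'EOF'
[global]
thread=1
invalidate=1
rw=randwrite
time_based=1
runtime=1
ioengine=libaio
direct=1
bs=4096
iodepth=128
norandommap=0
numjobs=1
verify_dump=1
verify_backlog=512
verify_state_save=0
do_verify=1
verify=crc32c-intel

[job0]
filename=/dev/nvme0n1
[job1]
filename=/dev/nvme0n2
[job2]
filename=/dev/nvme0n3
[job3]
filename=/dev/nvme0n4
EOF
sudo fio /tmp/nvmf-randwrite.fio

With numjobs=1 and four [job] sections this gives one worker per namespace, which matches the "Starting 4 threads" line in the trace that follows.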
(groupid=0, jobs=1): err= 0: pid=235368: Thu Jul 25 11:56:39 2024 00:10:52.643 read: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(16.0MiB/1022msec) 00:10:52.643 slat (nsec): min=1049, max=30352k, avg=114745.23, stdev=888785.96 00:10:52.643 clat (usec): min=1646, max=53807, avg=16017.50, stdev=6674.32 00:10:52.643 lat (usec): min=1664, max=53816, avg=16132.24, stdev=6713.00 00:10:52.643 clat percentiles (usec): 00:10:52.643 | 1.00th=[ 3064], 5.00th=[ 8848], 10.00th=[ 9503], 20.00th=[10814], 00:10:52.643 | 30.00th=[12256], 40.00th=[13435], 50.00th=[14746], 60.00th=[16188], 00:10:52.643 | 70.00th=[17957], 80.00th=[20055], 90.00th=[23725], 95.00th=[26870], 00:10:52.643 | 99.00th=[40633], 99.50th=[42206], 99.90th=[42206], 99.95th=[42206], 00:10:52.643 | 99.99th=[53740] 00:10:52.643 write: IOPS=4238, BW=16.6MiB/s (17.4MB/s)(16.9MiB/1022msec); 0 zone resets 00:10:52.643 slat (nsec): min=1973, max=11596k, avg=114945.41, stdev=634843.58 00:10:52.643 clat (usec): min=3765, max=39309, avg=14779.39, stdev=4947.57 00:10:52.643 lat (usec): min=3778, max=39313, avg=14894.33, stdev=4949.35 00:10:52.643 clat percentiles (usec): 00:10:52.643 | 1.00th=[ 5538], 5.00th=[ 7701], 10.00th=[ 9765], 20.00th=[11338], 00:10:52.643 | 30.00th=[12387], 40.00th=[13566], 50.00th=[14877], 60.00th=[15664], 00:10:52.643 | 70.00th=[16188], 80.00th=[16909], 90.00th=[19268], 95.00th=[21890], 00:10:52.643 | 99.00th=[35914], 99.50th=[37487], 99.90th=[39060], 99.95th=[39060], 00:10:52.643 | 99.99th=[39060] 00:10:52.643 bw ( KiB/s): min=16392, max=17248, per=27.83%, avg=16820.00, stdev=605.28, samples=2 00:10:52.643 iops : min= 4098, max= 4312, avg=4205.00, stdev=151.32, samples=2 00:10:52.643 lat (msec) : 2=0.20%, 4=0.75%, 10=10.81%, 20=74.45%, 50=13.78% 00:10:52.643 lat (msec) : 100=0.01% 00:10:52.643 cpu : usr=2.35%, sys=3.72%, ctx=671, majf=0, minf=1 00:10:52.643 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:52.643 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.643 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.643 issued rwts: total=4096,4332,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.643 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.643 job1: (groupid=0, jobs=1): err= 0: pid=235369: Thu Jul 25 11:56:39 2024 00:10:52.643 read: IOPS=4233, BW=16.5MiB/s (17.3MB/s)(16.6MiB/1003msec) 00:10:52.643 slat (nsec): min=1079, max=14637k, avg=88291.11, stdev=691480.90 00:10:52.643 clat (usec): min=2598, max=56158, avg=14495.42, stdev=5569.08 00:10:52.643 lat (usec): min=2604, max=56161, avg=14583.71, stdev=5590.45 00:10:52.643 clat percentiles (usec): 00:10:52.643 | 1.00th=[ 4146], 5.00th=[ 8225], 10.00th=[ 9503], 20.00th=[10683], 00:10:52.643 | 30.00th=[11338], 40.00th=[11863], 50.00th=[12518], 60.00th=[14484], 00:10:52.643 | 70.00th=[15795], 80.00th=[18482], 90.00th=[20579], 95.00th=[26346], 00:10:52.643 | 99.00th=[33424], 99.50th=[33424], 99.90th=[33817], 99.95th=[55313], 00:10:52.643 | 99.99th=[56361] 00:10:52.643 write: IOPS=4594, BW=17.9MiB/s (18.8MB/s)(18.0MiB/1003msec); 0 zone resets 00:10:52.643 slat (nsec): min=1955, max=16332k, avg=104008.69, stdev=709674.27 00:10:52.643 clat (usec): min=749, max=33788, avg=14280.46, stdev=5306.39 00:10:52.643 lat (usec): min=886, max=33798, avg=14384.46, stdev=5332.40 00:10:52.644 clat percentiles (usec): 00:10:52.644 | 1.00th=[ 3851], 5.00th=[ 6718], 10.00th=[ 7963], 20.00th=[10159], 00:10:52.644 | 30.00th=[11207], 40.00th=[12256], 50.00th=[13698], 60.00th=[15139], 00:10:52.644 | 
70.00th=[16712], 80.00th=[18482], 90.00th=[21103], 95.00th=[24249], 00:10:52.644 | 99.00th=[30802], 99.50th=[31065], 99.90th=[31065], 99.95th=[31065], 00:10:52.644 | 99.99th=[33817] 00:10:52.644 bw ( KiB/s): min=16384, max=20480, per=30.50%, avg=18432.00, stdev=2896.31, samples=2 00:10:52.644 iops : min= 4096, max= 5120, avg=4608.00, stdev=724.08, samples=2 00:10:52.644 lat (usec) : 750=0.01% 00:10:52.644 lat (msec) : 2=0.06%, 4=0.81%, 10=17.83%, 20=66.59%, 50=14.66% 00:10:52.644 lat (msec) : 100=0.03% 00:10:52.644 cpu : usr=2.89%, sys=3.79%, ctx=497, majf=0, minf=1 00:10:52.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.644 issued rwts: total=4246,4608,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.644 job2: (groupid=0, jobs=1): err= 0: pid=235370: Thu Jul 25 11:56:39 2024 00:10:52.644 read: IOPS=1942, BW=7769KiB/s (7956kB/s)(7808KiB/1005msec) 00:10:52.644 slat (nsec): min=1255, max=35035k, avg=267604.25, stdev=1896116.04 00:10:52.644 clat (usec): min=3558, max=95299, avg=33986.33, stdev=20783.56 00:10:52.644 lat (usec): min=7293, max=95305, avg=34253.93, stdev=20916.45 00:10:52.644 clat percentiles (usec): 00:10:52.644 | 1.00th=[ 7767], 5.00th=[11600], 10.00th=[13435], 20.00th=[17695], 00:10:52.644 | 30.00th=[19792], 40.00th=[23725], 50.00th=[28967], 60.00th=[32113], 00:10:52.644 | 70.00th=[41681], 80.00th=[47449], 90.00th=[72877], 95.00th=[81265], 00:10:52.644 | 99.00th=[90702], 99.50th=[90702], 99.90th=[94897], 99.95th=[94897], 00:10:52.644 | 99.99th=[94897] 00:10:52.644 write: IOPS=2037, BW=8151KiB/s (8347kB/s)(8192KiB/1005msec); 0 zone resets 00:10:52.644 slat (usec): min=2, max=20144, avg=222.98, stdev=1453.37 00:10:52.644 clat (usec): min=8914, max=66836, avg=29203.76, stdev=10692.29 00:10:52.644 lat (usec): min=8925, max=66851, avg=29426.74, stdev=10790.62 00:10:52.644 clat percentiles (usec): 00:10:52.644 | 1.00th=[11731], 5.00th=[15401], 10.00th=[17433], 20.00th=[21103], 00:10:52.644 | 30.00th=[23725], 40.00th=[24511], 50.00th=[26084], 60.00th=[28181], 00:10:52.644 | 70.00th=[32375], 80.00th=[37487], 90.00th=[47449], 95.00th=[51643], 00:10:52.644 | 99.00th=[61080], 99.50th=[64226], 99.90th=[65274], 99.95th=[66847], 00:10:52.644 | 99.99th=[66847] 00:10:52.644 bw ( KiB/s): min= 8192, max= 8192, per=13.55%, avg=8192.00, stdev= 0.00, samples=2 00:10:52.644 iops : min= 2048, max= 2048, avg=2048.00, stdev= 0.00, samples=2 00:10:52.644 lat (msec) : 4=0.03%, 10=1.08%, 20=23.90%, 50=63.10%, 100=11.90% 00:10:52.644 cpu : usr=1.49%, sys=2.29%, ctx=251, majf=0, minf=1 00:10:52.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.8%, >=64=98.4% 00:10:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.644 issued rwts: total=1952,2048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.644 job3: (groupid=0, jobs=1): err= 0: pid=235371: Thu Jul 25 11:56:39 2024 00:10:52.644 read: IOPS=4007, BW=15.7MiB/s (16.4MB/s)(16.0MiB/1022msec) 00:10:52.644 slat (nsec): min=1100, max=8854.4k, avg=96861.38, stdev=584337.04 00:10:52.644 clat (usec): min=6486, max=27341, avg=12612.97, stdev=3447.35 00:10:52.644 lat (usec): min=6501, 
max=31141, avg=12709.83, stdev=3477.43 00:10:52.644 clat percentiles (usec): 00:10:52.644 | 1.00th=[ 6718], 5.00th=[ 8848], 10.00th=[ 9241], 20.00th=[ 9634], 00:10:52.644 | 30.00th=[10290], 40.00th=[11338], 50.00th=[11994], 60.00th=[12649], 00:10:52.644 | 70.00th=[13566], 80.00th=[14877], 90.00th=[17433], 95.00th=[19530], 00:10:52.644 | 99.00th=[23987], 99.50th=[25560], 99.90th=[26608], 99.95th=[26608], 00:10:52.644 | 99.99th=[27395] 00:10:52.644 write: IOPS=4358, BW=17.0MiB/s (17.8MB/s)(17.4MiB/1022msec); 0 zone resets 00:10:52.644 slat (usec): min=2, max=8609, avg=127.28, stdev=514.97 00:10:52.644 clat (usec): min=2401, max=41969, avg=17335.01, stdev=5718.09 00:10:52.644 lat (usec): min=2421, max=41974, avg=17462.30, stdev=5749.15 00:10:52.644 clat percentiles (usec): 00:10:52.644 | 1.00th=[ 6456], 5.00th=[ 8160], 10.00th=[10159], 20.00th=[11469], 00:10:52.644 | 30.00th=[13829], 40.00th=[15664], 50.00th=[17433], 60.00th=[19530], 00:10:52.644 | 70.00th=[21103], 80.00th=[22152], 90.00th=[23462], 95.00th=[25035], 00:10:52.644 | 99.00th=[34341], 99.50th=[37487], 99.90th=[42206], 99.95th=[42206], 00:10:52.644 | 99.99th=[42206] 00:10:52.644 bw ( KiB/s): min=17016, max=17592, per=28.63%, avg=17304.00, stdev=407.29, samples=2 00:10:52.644 iops : min= 4254, max= 4398, avg=4326.00, stdev=101.82, samples=2 00:10:52.644 lat (msec) : 4=0.02%, 10=15.66%, 20=62.41%, 50=21.91% 00:10:52.644 cpu : usr=2.15%, sys=3.23%, ctx=838, majf=0, minf=1 00:10:52.644 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.2%, 32=0.4%, >=64=99.3% 00:10:52.644 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:52.644 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:10:52.644 issued rwts: total=4096,4454,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:52.644 latency : target=0, window=0, percentile=100.00%, depth=128 00:10:52.644 00:10:52.644 Run status group 0 (all jobs): 00:10:52.644 READ: bw=55.0MiB/s (57.7MB/s), 7769KiB/s-16.5MiB/s (7956kB/s-17.3MB/s), io=56.2MiB (58.9MB), run=1003-1022msec 00:10:52.644 WRITE: bw=59.0MiB/s (61.9MB/s), 8151KiB/s-17.9MiB/s (8347kB/s-18.8MB/s), io=60.3MiB (63.2MB), run=1003-1022msec 00:10:52.644 00:10:52.644 Disk stats (read/write): 00:10:52.644 nvme0n1: ios=3544/3591, merge=0/0, ticks=51193/42378, in_queue=93571, util=90.28% 00:10:52.644 nvme0n2: ios=3608/3632, merge=0/0, ticks=53353/48412, in_queue=101765, util=98.48% 00:10:52.644 nvme0n3: ios=1593/1575, merge=0/0, ticks=21891/17610, in_queue=39501, util=93.87% 00:10:52.644 nvme0n4: ios=3545/3601, merge=0/0, ticks=25993/33614, in_queue=59607, util=98.74% 00:10:52.644 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@55 -- # sync 00:10:52.644 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@59 -- # fio_pid=235601 00:10:52.644 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/fio-wrapper -p nvmf -i 4096 -d 1 -t read -r 10 00:10:52.644 11:56:39 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@61 -- # sleep 3 00:10:52.644 [global] 00:10:52.644 thread=1 00:10:52.644 invalidate=1 00:10:52.644 rw=read 00:10:52.644 time_based=1 00:10:52.644 runtime=10 00:10:52.644 ioengine=libaio 00:10:52.644 direct=1 00:10:52.644 bs=4096 00:10:52.644 iodepth=1 00:10:52.644 norandommap=1 00:10:52.644 numjobs=1 00:10:52.644 00:10:52.644 [job0] 00:10:52.644 filename=/dev/nvme0n1 00:10:52.644 [job1] 00:10:52.644 filename=/dev/nvme0n2 00:10:52.644 [job2] 00:10:52.644 filename=/dev/nvme0n3 
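While the 10-second read job being set up here (runtime=10, norandommap=1, iodepth=1) is still running, the script tears the storage out from under it: it deletes the RAID bdevs and then each Malloc bdev over RPC, and the resulting "Remote I/O error" failures from fio are the expected outcome ("nvmf hotplug test: fio failed as expected"). A condensed sketch of that sequence, as the trace below shows it, is given here; the bdev names and rpc.py path come from the log, while the fio job-file path is invented for illustration and the exit-status handling is simplified relative to target/fio.sh:

#!/usr/bin/env bash
# Condensed sketch of the hot-remove ("hotplug") test traced below. Bdev names and the
# rpc.py path are taken from the log; /tmp/nvmf-read-10s.fio is an invented placeholder.
rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

sudo fio /tmp/nvmf-read-10s.fio &    # long-running read against the four namespaces
fio_pid=$!
sleep 3                              # let I/O get going before pulling the bdevs

$rpc bdev_raid_delete concat0        # in-flight reads now complete with Remote I/O error
$rpc bdev_raid_delete raid0
for malloc_bdev in Malloc0 Malloc1 Malloc2 Malloc3 Malloc4 Malloc5 Malloc6; do
    $rpc bdev_malloc_delete "$malloc_bdev"
done

fio_status=0
wait "$fio_pid" || fio_status=$?     # fio is expected to exit non-zero here
if [ "$fio_status" -ne 0 ]; then
    echo "nvmf hotplug test: fio failed as expected"
fi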
00:10:52.644 [job3] 00:10:52.644 filename=/dev/nvme0n4 00:10:52.644 Could not set queue depth (nvme0n1) 00:10:52.644 Could not set queue depth (nvme0n2) 00:10:52.644 Could not set queue depth (nvme0n3) 00:10:52.644 Could not set queue depth (nvme0n4) 00:10:52.903 job0: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.903 job1: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.904 job2: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.904 job3: (g=0): rw=read, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=1 00:10:52.904 fio-3.35 00:10:52.904 Starting 4 threads 00:10:55.442 11:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete concat0 00:10:55.701 fio: io_u error on file /dev/nvme0n4: Remote I/O error: read offset=757760, buflen=4096 00:10:55.701 fio: pid=235744, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:55.702 11:56:42 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_raid_delete raid0 00:10:55.962 fio: io_u error on file /dev/nvme0n3: Remote I/O error: read offset=786432, buflen=4096 00:10:55.962 fio: pid=235740, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:55.962 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:55.962 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc0 00:10:56.277 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.277 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc1 00:10:56.277 fio: io_u error on file /dev/nvme0n1: Remote I/O error: read offset=303104, buflen=4096 00:10:56.277 fio: pid=235737, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:56.277 fio: io_u error on file /dev/nvme0n2: Remote I/O error: read offset=24203264, buflen=4096 00:10:56.277 fio: pid=235738, err=121/file:io_u.c:1889, func=io_u error, error=Remote I/O error 00:10:56.277 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.277 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc2 00:10:56.277 00:10:56.277 job0: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=235737: Thu Jul 25 11:56:43 2024 00:10:56.277 read: IOPS=24, BW=96.2KiB/s (98.5kB/s)(296KiB/3077msec) 00:10:56.277 slat (usec): min=12, max=7520, avg=121.83, stdev=865.93 00:10:56.277 clat (usec): min=1264, max=43783, avg=41439.80, stdev=4739.72 00:10:56.277 lat (usec): min=1304, max=49038, avg=41562.95, stdev=4818.71 00:10:56.277 clat percentiles (usec): 00:10:56.277 | 1.00th=[ 1270], 5.00th=[41681], 10.00th=[41681], 20.00th=[41681], 00:10:56.277 | 30.00th=[42206], 
40.00th=[42206], 50.00th=[42206], 60.00th=[42206], 00:10:56.277 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.277 | 99.00th=[43779], 99.50th=[43779], 99.90th=[43779], 99.95th=[43779], 00:10:56.277 | 99.99th=[43779] 00:10:56.277 bw ( KiB/s): min= 95, max= 96, per=1.21%, avg=95.80, stdev= 0.45, samples=5 00:10:56.277 iops : min= 23, max= 24, avg=23.80, stdev= 0.45, samples=5 00:10:56.277 lat (msec) : 2=1.33%, 50=97.33% 00:10:56.277 cpu : usr=0.13%, sys=0.00%, ctx=76, majf=0, minf=1 00:10:56.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.277 complete : 0=1.3%, 4=98.7%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.277 issued rwts: total=75,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.277 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.277 job1: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=235738: Thu Jul 25 11:56:43 2024 00:10:56.277 read: IOPS=1824, BW=7297KiB/s (7472kB/s)(23.1MiB/3239msec) 00:10:56.277 slat (usec): min=6, max=16794, avg=16.71, stdev=339.03 00:10:56.277 clat (usec): min=413, max=3067, avg=528.79, stdev=90.72 00:10:56.277 lat (usec): min=421, max=17589, avg=545.50, stdev=359.29 00:10:56.277 clat percentiles (usec): 00:10:56.277 | 1.00th=[ 437], 5.00th=[ 461], 10.00th=[ 474], 20.00th=[ 482], 00:10:56.277 | 30.00th=[ 490], 40.00th=[ 494], 50.00th=[ 498], 60.00th=[ 506], 00:10:56.277 | 70.00th=[ 523], 80.00th=[ 553], 90.00th=[ 652], 95.00th=[ 742], 00:10:56.277 | 99.00th=[ 775], 99.50th=[ 783], 99.90th=[ 971], 99.95th=[ 1795], 00:10:56.277 | 99.99th=[ 3064] 00:10:56.277 bw ( KiB/s): min= 6872, max= 8000, per=93.63%, avg=7354.33, stdev=431.89, samples=6 00:10:56.277 iops : min= 1718, max= 2000, avg=1838.50, stdev=108.01, samples=6 00:10:56.277 lat (usec) : 500=50.90%, 750=44.82%, 1000=4.20% 00:10:56.277 lat (msec) : 2=0.03%, 4=0.03% 00:10:56.277 cpu : usr=1.05%, sys=3.06%, ctx=5916, majf=0, minf=1 00:10:56.277 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.277 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.277 complete : 0=0.1%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.277 issued rwts: total=5910,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.278 job2: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=235740: Thu Jul 25 11:56:43 2024 00:10:56.278 read: IOPS=67, BW=269KiB/s (275kB/s)(768KiB/2859msec) 00:10:56.278 slat (usec): min=6, max=4691, avg=37.43, stdev=336.81 00:10:56.278 clat (usec): min=467, max=43059, avg=14844.47, stdev=19640.84 00:10:56.278 lat (usec): min=475, max=46886, avg=14881.98, stdev=19683.56 00:10:56.278 clat percentiles (usec): 00:10:56.278 | 1.00th=[ 486], 5.00th=[ 510], 10.00th=[ 523], 20.00th=[ 537], 00:10:56.278 | 30.00th=[ 553], 40.00th=[ 603], 50.00th=[ 635], 60.00th=[ 1074], 00:10:56.278 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.278 | 99.00th=[42206], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:56.278 | 99.99th=[43254] 00:10:56.278 bw ( KiB/s): min= 96, max= 1080, per=3.72%, avg=292.80, stdev=440.06, samples=5 00:10:56.278 iops : min= 24, max= 270, avg=73.20, stdev=110.01, samples=5 00:10:56.278 lat (usec) : 500=1.55%, 750=49.74%, 1000=4.66% 00:10:56.278 lat (msec) : 
2=9.33%, 50=34.20% 00:10:56.278 cpu : usr=0.00%, sys=0.17%, ctx=194, majf=0, minf=1 00:10:56.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.278 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.278 issued rwts: total=193,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.278 job3: (groupid=0, jobs=1): err=121 (file:io_u.c:1889, func=io_u error, error=Remote I/O error): pid=235744: Thu Jul 25 11:56:43 2024 00:10:56.278 read: IOPS=69, BW=275KiB/s (281kB/s)(740KiB/2692msec) 00:10:56.278 slat (nsec): min=6979, max=30852, avg=13118.12, stdev=7230.25 00:10:56.278 clat (usec): min=492, max=42999, avg=14519.58, stdev=19586.78 00:10:56.278 lat (usec): min=500, max=43023, avg=14532.64, stdev=19593.45 00:10:56.278 clat percentiles (usec): 00:10:56.278 | 1.00th=[ 494], 5.00th=[ 506], 10.00th=[ 515], 20.00th=[ 537], 00:10:56.278 | 30.00th=[ 553], 40.00th=[ 586], 50.00th=[ 644], 60.00th=[ 1057], 00:10:56.278 | 70.00th=[41681], 80.00th=[42206], 90.00th=[42206], 95.00th=[42206], 00:10:56.278 | 99.00th=[42730], 99.50th=[43254], 99.90th=[43254], 99.95th=[43254], 00:10:56.278 | 99.99th=[43254] 00:10:56.278 bw ( KiB/s): min= 96, max= 1048, per=3.67%, avg=288.00, stdev=424.87, samples=5 00:10:56.278 iops : min= 24, max= 262, avg=72.00, stdev=106.22, samples=5 00:10:56.278 lat (usec) : 500=2.69%, 750=51.08%, 1000=4.30% 00:10:56.278 lat (msec) : 2=8.06%, 50=33.33% 00:10:56.278 cpu : usr=0.00%, sys=0.15%, ctx=188, majf=0, minf=2 00:10:56.278 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:10:56.278 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.278 complete : 0=0.5%, 4=99.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:10:56.278 issued rwts: total=186,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:10:56.278 latency : target=0, window=0, percentile=100.00%, depth=1 00:10:56.278 00:10:56.278 Run status group 0 (all jobs): 00:10:56.278 READ: bw=7854KiB/s (8043kB/s), 96.2KiB/s-7297KiB/s (98.5kB/s-7472kB/s), io=24.8MiB (26.1MB), run=2692-3239msec 00:10:56.278 00:10:56.278 Disk stats (read/write): 00:10:56.278 nvme0n1: ios=68/0, merge=0/0, ticks=2816/0, in_queue=2816, util=94.96% 00:10:56.278 nvme0n2: ios=5726/0, merge=0/0, ticks=3921/0, in_queue=3921, util=98.08% 00:10:56.278 nvme0n3: ios=191/0, merge=0/0, ticks=2810/0, in_queue=2810, util=96.38% 00:10:56.278 nvme0n4: ios=227/0, merge=0/0, ticks=3550/0, in_queue=3550, util=99.37% 00:10:56.537 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.537 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc3 00:10:56.796 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.796 11:56:43 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc4 00:10:56.796 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:56.796 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- 
target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc5 00:10:57.056 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@65 -- # for malloc_bdev in $malloc_bdevs $raid_malloc_bdevs $concat_malloc_bdevs 00:10:57.056 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@66 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_delete Malloc6 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@69 -- # fio_status=0 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # wait 235601 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@70 -- # fio_status=4 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@72 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:10:57.317 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@73 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1219 -- # local i=0 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1231 -- # return 0 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@75 -- # '[' 4 -eq 0 ']' 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@80 -- # echo 'nvmf hotplug test: fio failed as expected' 00:10:57.317 nvmf hotplug test: fio failed as expected 00:10:57.317 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@85 -- # rm -f ./local-job0-0-verify.state 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@86 -- # rm -f ./local-job1-1-verify.state 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@87 -- # rm -f ./local-job2-2-verify.state 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@89 -- # trap - SIGINT SIGTERM EXIT 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- target/fio.sh@91 -- # nvmftestfini 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@117 -- # sync 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@120 -- # set +e 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@122 -- # modprobe 
-v -r nvme-tcp 00:10:57.578 rmmod nvme_tcp 00:10:57.578 rmmod nvme_fabrics 00:10:57.578 rmmod nvme_keyring 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@124 -- # set -e 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@125 -- # return 0 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@489 -- # '[' -n 232674 ']' 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@490 -- # killprocess 232674 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@948 -- # '[' -z 232674 ']' 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@952 -- # kill -0 232674 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # uname 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 232674 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 232674' 00:10:57.578 killing process with pid 232674 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@967 -- # kill 232674 00:10:57.578 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@972 -- # wait 232674 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:10:57.838 11:56:44 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:00.381 00:11:00.381 real 0m26.302s 00:11:00.381 user 1m46.496s 00:11:00.381 sys 0m7.394s 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_fio_target -- common/autotest_common.sh@10 -- # set +x 00:11:00.381 ************************************ 00:11:00.381 END TEST nvmf_fio_target 00:11:00.381 ************************************ 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:00.381 
11:56:47 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@35 -- # run_test nvmf_bdevio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:00.381 ************************************ 00:11:00.381 START TEST nvmf_bdevio 00:11:00.381 ************************************ 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp 00:11:00.381 * Looking for test storage... 00:11:00.381 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # uname -s 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:00.381 11:56:47 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@5 -- # export PATH 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@47 -- # : 0 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@14 -- # nvmftestinit 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:00.381 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@285 -- # xtrace_disable 00:11:00.382 11:56:47 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # pci_devs=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # net_devs=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # e810=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@296 -- # local -ga e810 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # x722=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@297 -- # local -ga x722 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # mlx=() 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@298 -- # local -ga mlx 
00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:05.667 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:05.667 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 
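[Editor's note] The "Found 0000:86:00.x (0x8086 - 0x159b)" lines come from gather_supported_nvmf_pci_devs walking its device-id tables (e810/x722/mlx) and then reading the kernel net device behind each matching PCI function from sysfs. A standalone sketch of that lookup, using only the sysfs layout the harness itself reads (0x8086/0x159b are the E810 ids this host matched; the other ids appended above would be handled the same way):

# Sketch: list Intel E810 ports (vendor 0x8086, device 0x159b) and their net devices.
for pci in /sys/bus/pci/devices/*; do
    [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
    echo "Found ${pci##*/} (0x8086 - 0x159b)"
    for net in "$pci"/net/*; do
        [[ -e $net ]] && echo "  net device under ${pci##*/}: ${net##*/}"   # e.g. cvl_0_0 / cvl_0_1
    done
done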
00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:05.667 Found net devices under 0000:86:00.0: cvl_0_0 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:05.667 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:05.667 Found net devices under 0000:86:00.1: cvl_0_1 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@414 -- # is_hw=yes 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:05.668 11:56:52 
nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:05.668 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:05.929 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:05.929 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.279 ms 00:11:05.929 00:11:05.929 --- 10.0.0.2 ping statistics --- 00:11:05.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.929 rtt min/avg/max/mdev = 0.279/0.279/0.279/0.000 ms 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:05.929 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:05.929 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.275 ms 00:11:05.929 00:11:05.929 --- 10.0.0.1 ping statistics --- 00:11:05.929 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:05.929 rtt min/avg/max/mdev = 0.275/0.275/0.275/0.000 ms 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@422 -- # return 0 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x78 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@481 -- # nvmfpid=239978 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@482 -- # waitforlisten 239978 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@829 -- # '[' -z 239978 ']' 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:05.929 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:05.929 11:56:52 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:05.929 [2024-07-25 11:56:53.037679] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:11:05.929 [2024-07-25 11:56:53.037723] [ DPDK EAL parameters: nvmf -c 0x78 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:05.929 EAL: No free 2048 kB hugepages reported on node 1 00:11:05.929 [2024-07-25 11:56:53.097606] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:05.929 [2024-07-25 11:56:53.171599] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:05.929 [2024-07-25 11:56:53.171637] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:05.929 [2024-07-25 11:56:53.171644] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:05.929 [2024-07-25 11:56:53.171650] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:05.929 [2024-07-25 11:56:53.171655] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:11:05.929 [2024-07-25 11:56:53.171769] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:11:05.929 [2024-07-25 11:56:53.171874] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:11:05.929 [2024-07-25 11:56:53.171979] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:05.929 [2024-07-25 11:56:53.171980] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@862 -- # return 0 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 [2024-07-25 11:56:53.893552] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 Malloc0 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:06.868 [2024-07-25 11:56:53.944910] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # config=() 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@532 -- # local subsystem config 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:11:06.868 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:11:06.868 { 00:11:06.868 "params": { 00:11:06.868 "name": "Nvme$subsystem", 00:11:06.868 "trtype": "$TEST_TRANSPORT", 00:11:06.868 "traddr": "$NVMF_FIRST_TARGET_IP", 00:11:06.868 "adrfam": "ipv4", 00:11:06.869 "trsvcid": "$NVMF_PORT", 00:11:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:11:06.869 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:11:06.869 "hdgst": ${hdgst:-false}, 00:11:06.869 "ddgst": ${ddgst:-false} 00:11:06.869 }, 00:11:06.869 "method": "bdev_nvme_attach_controller" 00:11:06.869 } 00:11:06.869 EOF 00:11:06.869 )") 00:11:06.869 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@554 -- # cat 00:11:06.869 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@556 -- # jq . 
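[Editor's note] Taken together, the rpc_cmd calls above build the entire bdevio target: a TCP transport, a 64 MiB / 512 B malloc bdev, subsystem cnode1 with that bdev as its namespace, and a TCP listener on 10.0.0.2:4420. rpc_cmd is just the harness wrapper around scripts/rpc.py, so replayed by hand against a running nvmf_tgt the equivalent sequence would be roughly the following (addresses, NQN and serial are the ones used in this run):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
# bdevio then consumes the generated attach-controller JSON (printed just below) via
# test/bdev/bdevio/bdevio --json /dev/fd/62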
00:11:06.869 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@557 -- # IFS=, 00:11:06.869 11:56:53 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:11:06.869 "params": { 00:11:06.869 "name": "Nvme1", 00:11:06.869 "trtype": "tcp", 00:11:06.869 "traddr": "10.0.0.2", 00:11:06.869 "adrfam": "ipv4", 00:11:06.869 "trsvcid": "4420", 00:11:06.869 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:11:06.869 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:11:06.869 "hdgst": false, 00:11:06.869 "ddgst": false 00:11:06.869 }, 00:11:06.869 "method": "bdev_nvme_attach_controller" 00:11:06.869 }' 00:11:06.869 [2024-07-25 11:56:53.992548] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:11:06.869 [2024-07-25 11:56:53.992589] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid240230 ] 00:11:06.869 EAL: No free 2048 kB hugepages reported on node 1 00:11:06.869 [2024-07-25 11:56:54.048025] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:11:07.128 [2024-07-25 11:56:54.123751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:07.128 [2024-07-25 11:56:54.123845] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:07.128 [2024-07-25 11:56:54.123847] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:07.128 I/O targets: 00:11:07.128 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:11:07.128 00:11:07.128 00:11:07.128 CUnit - A unit testing framework for C - Version 2.1-3 00:11:07.128 http://cunit.sourceforge.net/ 00:11:07.128 00:11:07.128 00:11:07.128 Suite: bdevio tests on: Nvme1n1 00:11:07.128 Test: blockdev write read block ...passed 00:11:07.387 Test: blockdev write zeroes read block ...passed 00:11:07.387 Test: blockdev write zeroes read no split ...passed 00:11:07.387 Test: blockdev write zeroes read split ...passed 00:11:07.387 Test: blockdev write zeroes read split partial ...passed 00:11:07.387 Test: blockdev reset ...[2024-07-25 11:56:54.523065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:11:07.387 [2024-07-25 11:56:54.523128] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f2b6d0 (9): Bad file descriptor 00:11:07.646 [2024-07-25 11:56:54.673702] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:11:07.646 passed 00:11:07.646 Test: blockdev write read 8 blocks ...passed 00:11:07.646 Test: blockdev write read size > 128k ...passed 00:11:07.646 Test: blockdev write read invalid size ...passed 00:11:07.646 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:11:07.646 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:11:07.646 Test: blockdev write read max offset ...passed 00:11:07.646 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:11:07.646 Test: blockdev writev readv 8 blocks ...passed 00:11:07.646 Test: blockdev writev readv 30 x 1block ...passed 00:11:07.646 Test: blockdev writev readv block ...passed 00:11:07.646 Test: blockdev writev readv size > 128k ...passed 00:11:07.646 Test: blockdev writev readv size > 128k in two iovs ...passed 00:11:07.646 Test: blockdev comparev and writev ...[2024-07-25 11:56:54.868136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.868165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.868179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.868187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.868712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.868724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.868736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.868744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.869264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.869276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.869288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.869296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.869796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.869807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:11:07.646 [2024-07-25 11:56:54.869823] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:11:07.646 [2024-07-25 11:56:54.869831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:11:07.906 passed 00:11:07.906 Test: blockdev nvme passthru rw ...passed 00:11:07.906 Test: blockdev nvme passthru vendor specific ...[2024-07-25 11:56:54.954011] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.906 [2024-07-25 11:56:54.954026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:11:07.906 [2024-07-25 11:56:54.954420] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.906 [2024-07-25 11:56:54.954431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:11:07.906 [2024-07-25 11:56:54.954809] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.906 [2024-07-25 11:56:54.954820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:11:07.906 [2024-07-25 11:56:54.955203] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:11:07.906 [2024-07-25 11:56:54.955215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:11:07.906 passed 00:11:07.906 Test: blockdev nvme admin passthru ...passed 00:11:07.906 Test: blockdev copy ...passed 00:11:07.906 00:11:07.906 Run Summary: Type Total Ran Passed Failed Inactive 00:11:07.906 suites 1 1 n/a 0 0 00:11:07.906 tests 23 23 23 0 0 00:11:07.906 asserts 152 152 152 0 n/a 00:11:07.906 00:11:07.906 Elapsed time = 1.378 seconds 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- target/bdevio.sh@30 -- # nvmftestfini 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@117 -- # sync 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@120 -- # set +e 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:08.166 rmmod nvme_tcp 00:11:08.166 rmmod nvme_fabrics 00:11:08.166 rmmod nvme_keyring 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@124 -- # set -e 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@125 -- # return 0 
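[Editor's note] The teardown that surrounds this point mirrors the setup in reverse: delete the test subsystem, unload the kernel NVMe/TCP modules, kill the nvmf_tgt started earlier, and flush the test namespace and addresses. Condensed from the trace above and below into a sketch (pid 239978 and the cvl_* names are specific to this run; the namespace deletion is an assumption, since _remove_spdk_ns runs with xtrace disabled and its body never appears in the log):

RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
$RPC nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
modprobe -v -r nvme-tcp          # produces the rmmod nvme_tcp/nvme_fabrics/nvme_keyring lines seen above
modprobe -v -r nvme-fabrics
kill 239978 && wait 239978       # wait works because the harness launched nvmf_tgt from this same shell
ip netns delete cvl_0_0_ns_spdk  # assumed content of _remove_spdk_ns
ip -4 addr flush cvl_0_1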
00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@489 -- # '[' -n 239978 ']' 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@490 -- # killprocess 239978 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@948 -- # '[' -z 239978 ']' 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@952 -- # kill -0 239978 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # uname 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 239978 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@966 -- # echo 'killing process with pid 239978' 00:11:08.166 killing process with pid 239978 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@967 -- # kill 239978 00:11:08.166 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@972 -- # wait 239978 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:08.426 11:56:55 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:10.337 00:11:10.337 real 0m10.425s 00:11:10.337 user 0m13.133s 00:11:10.337 sys 0m4.913s 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core.nvmf_bdevio -- common/autotest_common.sh@10 -- # set +x 00:11:10.337 ************************************ 00:11:10.337 END TEST nvmf_bdevio 00:11:10.337 ************************************ 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1142 -- # return 0 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core -- nvmf/nvmf_target_core.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:11:10.337 00:11:10.337 real 4m33.891s 00:11:10.337 user 10m38.885s 00:11:10.337 sys 1m32.469s 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:10.337 11:56:57 nvmf_tcp.nvmf_target_core -- common/autotest_common.sh@10 -- # set +x 00:11:10.337 ************************************ 00:11:10.337 END TEST nvmf_target_core 00:11:10.337 
************************************ 00:11:10.597 11:56:57 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:11:10.597 11:56:57 nvmf_tcp -- nvmf/nvmf.sh@15 -- # run_test nvmf_target_extra /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.597 11:56:57 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.597 11:56:57 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.597 11:56:57 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:11:10.597 ************************************ 00:11:10.597 START TEST nvmf_target_extra 00:11:10.597 ************************************ 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_target_extra.sh --transport=tcp 00:11:10.597 * Looking for test storage... 00:11:10.597 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # uname -s 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@5 -- # export PATH 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@47 -- # : 0 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@13 -- # TEST_ARGS=("$@") 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@15 -- # [[ 0 -eq 0 ]] 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@16 -- # run_test nvmf_example /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 
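[Editor's note] Every sub-suite in this log is driven by the same run_test helper: it prints the START banner, times the named script, propagates its exit code, and prints the END banner (the real/user/sys lines after each suite come from that timing). A stripped-down sketch of that pattern, not the actual common/autotest_common.sh implementation:

# Simplified illustration of the run_test pattern used throughout this log.
run_test_sketch() {
    local name=$1; shift
    echo "************ START TEST $name ************"
    time "$@"                    # run the suite script with its arguments
    local rc=$?
    echo "************ END TEST $name ************"
    return $rc
}
run_test_sketch nvmf_example \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp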
00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:10.597 11:56:57 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:10.597 ************************************ 00:11:10.597 START TEST nvmf_example 00:11:10.597 ************************************ 00:11:10.598 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_example.sh --transport=tcp 00:11:10.858 * Looking for test storage... 00:11:10.858 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # uname -s 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:10.858 11:56:57 
nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@5 -- # export PATH 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@47 -- # : 0 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@33 -- # '[' -n 
'' ']' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@11 -- # NVMF_EXAMPLE=("$SPDK_EXAMPLE_DIR/nvmf") 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@13 -- # MALLOC_BDEV_SIZE=64 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@14 -- # MALLOC_BLOCK_SIZE=512 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@24 -- # build_nvmf_example_args 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@17 -- # '[' 0 -eq 1 ']' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@20 -- # NVMF_EXAMPLE+=(-i "$NVMF_APP_SHM_ID" -g 10000) 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@21 -- # NVMF_EXAMPLE+=("${NO_HUGE[@]}") 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@40 -- # timing_enter nvmf_example_test 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@41 -- # nvmftestinit 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@285 -- # xtrace_disable 00:11:10.858 11:56:57 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # pci_devs=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 
-- # pci_drivers=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # net_devs=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # e810=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@296 -- # local -ga e810 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # x722=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@297 -- # local -ga x722 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # mlx=() 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@298 -- # local -ga mlx 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:16.137 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice 
== unbound ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:16.137 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:16.137 Found net devices under 0000:86:00.0: cvl_0_0 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:16.137 11:57:02 
nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:16.137 Found net devices under 0000:86:00.1: cvl_0_1 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@414 -- # is_hw=yes 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:16.137 11:57:02 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:16.137 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:11:16.137 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:11:16.137 00:11:16.137 --- 10.0.0.2 ping statistics --- 00:11:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.137 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:16.137 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:11:16.137 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.429 ms 00:11:16.137 00:11:16.137 --- 10.0.0.1 ping statistics --- 00:11:16.137 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:16.137 rtt min/avg/max/mdev = 0.429/0.429/0.429/0.000 ms 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@422 -- # return 0 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:16.137 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@42 -- # nvmfexamplestart '-m 0xF' 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@27 -- # timing_enter start_nvmf_example 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@29 -- # '[' tcp == tcp ']' 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@30 -- # NVMF_EXAMPLE=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_EXAMPLE[@]}") 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@34 -- # nvmfpid=244057 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@35 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/nvmf -i 0 -g 10000 -m 0xF 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@36 -- # waitforlisten 244057 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@829 -- # '[' -z 244057 ']' 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example 
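For readers following the trace, the nvmf_tcp_init sequence above boils down to carving one port of the NIC pair into a private network namespace so that the target (10.0.0.2 inside the namespace) and the initiator (10.0.0.1 on the host) talk over a real TCP path. A condensed sketch of the commands visible in the trace; the interface names cvl_0_0/cvl_0_1 and the 10.0.0.0/24 addresses are specific to this run and will differ on other hosts:

  ip -4 addr flush cvl_0_0 && ip -4 addr flush cvl_0_1    # start from clean interfaces
  ip netns add cvl_0_0_ns_spdk                            # namespace that will host the target
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk               # move the target-side port into it
  ip addr add 10.0.0.1/24 dev cvl_0_1                     # initiator address on the host side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                      # host -> namespace reachability check
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1        # namespace -> host reachability check
  modprobe nvme-tcp                                       # ensure the nvme-tcp kernel module is loaded

Both pings succeed here (0.189 ms and 0.429 ms round trip), so the rest of the subtest can assume the 10.0.0.1 <-> 10.0.0.2 path is usable.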
-- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:16.138 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:16.138 11:57:03 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:16.138 EAL: No free 2048 kB hugepages reported on node 1 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@862 -- # return 0 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@37 -- # timing_exit start_nvmf_example 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # rpc_cmd bdev_malloc_create 64 512 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@47 -- # malloc_bdevs='Malloc0 ' 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@49 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@52 -- # for malloc_bdev in $malloc_bdevs 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:11:17.075 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@59 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:11:17.076 11:57:04 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:11:17.076 EAL: No free 2048 kB hugepages reported on node 1 00:11:27.098 Initializing NVMe Controllers 00:11:27.098 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:11:27.098 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:11:27.098 Initialization complete. Launching workers. 00:11:27.098 ======================================================== 00:11:27.098 Latency(us) 00:11:27.098 Device Information : IOPS MiB/s Average min max 00:11:27.098 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 13687.00 53.46 4675.99 723.92 16456.80 00:11:27.098 ======================================================== 00:11:27.098 Total : 13687.00 53.46 4675.99 723.92 16456.80 00:11:27.098 00:11:27.358 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@65 -- # trap - SIGINT SIGTERM EXIT 00:11:27.358 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@66 -- # nvmftestfini 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@488 -- # nvmfcleanup 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@117 -- # sync 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@120 -- # set +e 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@121 -- # for i in {1..20} 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:11:27.359 rmmod nvme_tcp 00:11:27.359 rmmod nvme_fabrics 00:11:27.359 rmmod nvme_keyring 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@124 -- # set -e 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@125 -- # return 0 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@489 -- # '[' -n 244057 ']' 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@490 -- # killprocess 244057 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@948 -- # '[' -z 244057 ']' 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@952 -- # kill -0 244057 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # uname 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 
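The example-target setup and load generation traced in this block reduce to a handful of steps: start build/examples/nvmf inside the namespace, create the TCP transport, back a subsystem with a 64 MiB, 512-byte-block malloc bdev, expose it on 10.0.0.2:4420, then drive it from the host with spdk_nvme_perf. rpc_cmd in the trace is the test framework's wrapper around scripts/rpc.py, so issued by hand the same configuration would look roughly like the sketch below (nqn, serial number and sizes are the ones used in this run):

  ip netns exec cvl_0_0_ns_spdk ./build/examples/nvmf -i 0 -g 10000 -m 0xF &   # target on 4 cores
  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512                                   # creates Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  ./build/bin/spdk_nvme_perf -q 64 -o 4096 -w randrw -M 30 -t 10 \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The 10-second run above reports roughly 13.7k IOPS (53.46 MiB/s) at about 4.7 ms average latency for this 4 KiB random read/write workload against the malloc-backed namespace.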
-- # ps --no-headers -o comm= 244057 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@954 -- # process_name=nvmf 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@958 -- # '[' nvmf = sudo ']' 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@966 -- # echo 'killing process with pid 244057' 00:11:27.359 killing process with pid 244057 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@967 -- # kill 244057 00:11:27.359 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@972 -- # wait 244057 00:11:27.620 nvmf threads initialize successfully 00:11:27.620 bdev subsystem init successfully 00:11:27.620 created a nvmf target service 00:11:27.620 create targets's poll groups done 00:11:27.620 all subsystems of target started 00:11:27.620 nvmf target is running 00:11:27.620 all subsystems of target stopped 00:11:27.620 destroy targets's poll groups done 00:11:27.620 destroyed the nvmf target service 00:11:27.620 bdev subsystem finish successfully 00:11:27.620 nvmf threads destroy successfully 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@278 -- # remove_spdk_ns 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:27.620 11:57:14 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- target/nvmf_example.sh@67 -- # timing_exit nvmf_example_test 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 00:11:29.528 real 0m18.975s 00:11:29.528 user 0m45.741s 00:11:29.528 sys 0m5.263s 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:29.528 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_example -- common/autotest_common.sh@10 -- # set +x 00:11:29.528 ************************************ 00:11:29.528 END TEST nvmf_example 00:11:29.528 ************************************ 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@17 -- # run_test nvmf_filesystem /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 
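The teardown that closes the nvmf_example subtest is equally mechanical: unload the kernel NVMe/TCP stack, stop the example target, and undo the namespace plumbing. Pieced together from the nvmftestfini and nvmf_tcp_fini trace above; the final namespace removal is performed by the _remove_spdk_ns helper, which is assumed here to amount to an ip netns delete:

  modprobe -v -r nvme-tcp           # the rmmod lines show nvme_tcp, nvme_fabrics, nvme_keyring unloading
  modprobe -v -r nvme-fabrics
  kill 244057 && wait 244057        # nvmfpid of this run's example target
  ip netns delete cvl_0_0_ns_spdk   # assumption: what _remove_spdk_ns does for this namespace
  ip -4 addr flush cvl_0_1

After this the node is back to its pre-test state and run_test moves on to the nvmf_filesystem subtest.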
00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:11:29.790 ************************************ 00:11:29.790 START TEST nvmf_filesystem 00:11:29.790 ************************************ 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/filesystem.sh --transport=tcp 00:11:29.790 * Looking for test storage... 00:11:29.790 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@34 -- # set -e 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@39 -- # '[' -z /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output ']' 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@44 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh ]] 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/build_config.sh 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@2 -- # CONFIG_ASAN=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:29.790 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@17 -- # CONFIG_PGO_CAPTURE=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@18 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@19 -- # CONFIG_ENV=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@20 -- # CONFIG_LTO=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@21 -- # CONFIG_ISCSI_INITIATOR=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@22 -- # CONFIG_CET=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@23 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@24 -- # CONFIG_OCF_PATH= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@25 -- # CONFIG_RDMA_SET_TOS=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@26 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@27 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@28 -- # CONFIG_UBLK=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@29 -- # CONFIG_ISAL_CRYPTO=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@30 -- # CONFIG_OPENSSL_PATH= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@31 -- # CONFIG_OCF=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@32 -- # CONFIG_FUSE=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@33 -- # CONFIG_VTUNE_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@34 -- # CONFIG_FUZZER_LIB= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@35 -- # CONFIG_FUZZER=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@36 -- # CONFIG_DPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@37 -- # CONFIG_CRYPTO=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@38 -- # CONFIG_PGO_USE=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@39 -- # CONFIG_VHOST=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@40 -- # CONFIG_DAOS=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@41 -- # CONFIG_DPDK_INC_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@42 -- # CONFIG_DAOS_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@43 -- # CONFIG_UNIT_TESTS=n 00:11:29.790 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@44 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@45 -- # CONFIG_VIRTIO=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@46 -- # CONFIG_DPDK_UADK=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@47 -- # CONFIG_COVERAGE=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@48 -- # CONFIG_RDMA=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@49 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@50 -- # CONFIG_URING_PATH= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@51 -- # CONFIG_XNVME=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@52 -- # CONFIG_VFIO_USER=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@53 -- # CONFIG_ARCH=native 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@54 -- # CONFIG_HAVE_EVP_MAC=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@55 -- # CONFIG_URING_ZNS=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@56 -- # CONFIG_WERROR=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@57 -- # CONFIG_HAVE_LIBBSD=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@58 -- # CONFIG_UBSAN=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@59 -- # CONFIG_IPSEC_MB_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@60 -- # CONFIG_GOLANG=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@61 -- # CONFIG_ISAL=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@62 -- # CONFIG_IDXD_KERNEL=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@63 -- # CONFIG_DPDK_LIB_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@64 -- # CONFIG_RDMA_PROV=verbs 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@65 -- # CONFIG_APPS=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@66 -- # CONFIG_SHARED=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@67 -- # CONFIG_HAVE_KEYUTILS=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@68 -- # CONFIG_FC_PATH= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@69 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@70 -- # CONFIG_FC=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@71 -- # CONFIG_AVAHI=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@72 -- # CONFIG_FIO_PLUGIN=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@73 -- # CONFIG_RAID5F=n 00:11:29.790 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@74 -- # CONFIG_EXAMPLES=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@75 -- # CONFIG_TESTS=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@76 -- # CONFIG_CRYPTO_MLX5=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@77 -- # CONFIG_MAX_LCORES=128 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@78 -- # CONFIG_IPSEC_MB=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@79 -- # CONFIG_PGO_DIR= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@80 -- # CONFIG_DEBUG=y 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@81 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@82 -- # CONFIG_CROSS_PREFIX= 00:11:29.790 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/build_config.sh@83 -- # CONFIG_URING=n 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@54 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/applications.sh 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@8 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@9 -- # _root=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@10 -- # _app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@11 -- # _test_app_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@12 -- # _examples_dir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@22 -- # [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk/config.h ]] 00:11:29.791 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:29.791 #define SPDK_CONFIG_H 00:11:29.791 #define SPDK_CONFIG_APPS 1 00:11:29.791 #define SPDK_CONFIG_ARCH native 00:11:29.791 #undef SPDK_CONFIG_ASAN 00:11:29.791 #undef SPDK_CONFIG_AVAHI 00:11:29.791 #undef SPDK_CONFIG_CET 00:11:29.791 #define SPDK_CONFIG_COVERAGE 1 00:11:29.791 #define SPDK_CONFIG_CROSS_PREFIX 00:11:29.791 #undef SPDK_CONFIG_CRYPTO 00:11:29.791 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:29.791 #undef SPDK_CONFIG_CUSTOMOCF 00:11:29.791 #undef SPDK_CONFIG_DAOS 00:11:29.791 #define SPDK_CONFIG_DAOS_DIR 00:11:29.791 #define SPDK_CONFIG_DEBUG 1 00:11:29.791 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:29.791 #define SPDK_CONFIG_DPDK_DIR /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build 00:11:29.791 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:29.791 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:29.791 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:29.791 #undef SPDK_CONFIG_DPDK_UADK 00:11:29.791 #define SPDK_CONFIG_ENV /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/lib/env_dpdk 00:11:29.791 #define SPDK_CONFIG_EXAMPLES 1 00:11:29.791 #undef SPDK_CONFIG_FC 00:11:29.791 #define SPDK_CONFIG_FC_PATH 00:11:29.791 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:29.791 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:29.791 #undef SPDK_CONFIG_FUSE 00:11:29.791 #undef SPDK_CONFIG_FUZZER 00:11:29.791 #define SPDK_CONFIG_FUZZER_LIB 00:11:29.791 #undef SPDK_CONFIG_GOLANG 00:11:29.791 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:29.791 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:29.791 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:29.791 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:29.791 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:29.791 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:29.791 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:29.791 #define SPDK_CONFIG_IDXD 1 00:11:29.791 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:29.791 #undef SPDK_CONFIG_IPSEC_MB 00:11:29.791 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:29.791 #define SPDK_CONFIG_ISAL 1 00:11:29.791 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:29.791 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:29.791 #define SPDK_CONFIG_LIBDIR 00:11:29.791 #undef SPDK_CONFIG_LTO 00:11:29.791 #define SPDK_CONFIG_MAX_LCORES 128 00:11:29.791 #define SPDK_CONFIG_NVME_CUSE 1 00:11:29.791 #undef SPDK_CONFIG_OCF 00:11:29.791 #define SPDK_CONFIG_OCF_PATH 00:11:29.791 #define SPDK_CONFIG_OPENSSL_PATH 00:11:29.791 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:29.791 #define SPDK_CONFIG_PGO_DIR 00:11:29.791 #undef SPDK_CONFIG_PGO_USE 00:11:29.791 #define SPDK_CONFIG_PREFIX /usr/local 00:11:29.791 #undef SPDK_CONFIG_RAID5F 00:11:29.791 #undef SPDK_CONFIG_RBD 00:11:29.791 #define SPDK_CONFIG_RDMA 1 00:11:29.791 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:29.791 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:29.791 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:29.791 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:29.791 #define SPDK_CONFIG_SHARED 1 00:11:29.791 #undef SPDK_CONFIG_SMA 00:11:29.791 #define SPDK_CONFIG_TESTS 1 00:11:29.791 #undef SPDK_CONFIG_TSAN 00:11:29.791 #define SPDK_CONFIG_UBLK 1 00:11:29.791 #define SPDK_CONFIG_UBSAN 1 00:11:29.791 #undef SPDK_CONFIG_UNIT_TESTS 00:11:29.791 #undef SPDK_CONFIG_URING 00:11:29.791 #define SPDK_CONFIG_URING_PATH 00:11:29.791 #undef SPDK_CONFIG_URING_ZNS 00:11:29.791 #undef SPDK_CONFIG_USDT 00:11:29.791 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:29.791 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:29.791 #define SPDK_CONFIG_VFIO_USER 1 00:11:29.791 #define 
SPDK_CONFIG_VFIO_USER_DIR 00:11:29.791 #define SPDK_CONFIG_VHOST 1 00:11:29.791 #define SPDK_CONFIG_VIRTIO 1 00:11:29.791 #undef SPDK_CONFIG_VTUNE 00:11:29.791 #define SPDK_CONFIG_VTUNE_DIR 00:11:29.791 #define SPDK_CONFIG_WERROR 1 00:11:29.791 #define SPDK_CONFIG_WPDK_DIR 00:11:29.791 #undef SPDK_CONFIG_XNVME 00:11:29.791 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@55 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@56 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # dirname /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/common 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@6 -- # _pmdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # readlink -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/../../../ 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@7 -- # _pmrootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@64 -- # TEST_TAG=N/A 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@65 -- # TEST_TAG_FILE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/.run_test_name 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@67 -- # PM_OUTPUTDIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # uname -s 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@68 -- # PM_OS=Linux 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:29.791 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[0]= 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:29.792 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ............................... != QEMU ]] 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@81 -- # [[ ! -e /.dockerenv ]] 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@84 -- # MONITOR_RESOURCES+=(collect-cpu-temp) 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@85 -- # MONITOR_RESOURCES+=(collect-bmc-pm) 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- pm/common@88 -- # [[ ! -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power ]] 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@58 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@62 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@64 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@66 -- # : 1 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@68 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@70 -- # : 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@72 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@74 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@76 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@78 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@80 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@82 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@84 -- # : 0 
00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@86 -- # : 1 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@88 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@90 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@92 -- # : 1 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@94 -- # : 1 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@96 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@98 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@100 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@102 -- # : tcp 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@104 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@106 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@108 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@110 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@111 -- # export SPDK_TEST_IOAT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@112 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@113 -- # export SPDK_TEST_BLOBFS 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@114 -- # : 0 00:11:29.792 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@115 -- # export SPDK_TEST_VHOST_INIT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@116 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@117 -- # export SPDK_TEST_LVOL 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@118 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@119 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@120 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@121 -- # export SPDK_RUN_ASAN 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@122 -- # : 1 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@123 -- # export SPDK_RUN_UBSAN 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@124 -- # : 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@125 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@126 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@127 -- # export SPDK_RUN_NON_ROOT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@128 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@129 -- # export SPDK_TEST_CRYPTO 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@130 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@131 -- # export SPDK_TEST_FTL 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@132 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@133 -- # export SPDK_TEST_OCF 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@134 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@135 -- # export SPDK_TEST_VMD 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@136 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@137 -- # export SPDK_TEST_OPAL 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@138 -- # : 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@139 -- # export SPDK_TEST_NATIVE_DPDK 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@140 -- # : true 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@141 -- # export SPDK_AUTOTEST_X 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@142 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@143 -- # export SPDK_TEST_RAID5 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@144 -- # : 0 00:11:29.792 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@146 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@148 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@150 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@152 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@154 -- # : e810 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@156 -- # : 0 00:11:29.792 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@158 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@160 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@162 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL_DSA 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@164 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_IAA 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@167 -- # : 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@168 -- # export SPDK_TEST_FUZZER_TARGET 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@169 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@170 -- # export SPDK_TEST_NVMF_MDNS 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@171 -- # : 0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@172 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # export SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@175 -- # SPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib 00:11:29.793 
11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # export DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@176 -- # DPDK_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # export VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@177 -- # VFIO_LIB_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # export LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@178 -- # LD_LIBRARY_PATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/build/lib:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/libvfio-user/usr/local/lib 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@181 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # export 
PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@185 -- # PYTHONPATH=:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/rpc_plugins:/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/python 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@189 -- # PYTHONDONTWRITEBYTECODE=1 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@193 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@194 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@198 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@199 -- # rm -rf /var/tmp/asan_suppression_file 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@200 -- # cat 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@236 -- # echo leak:libfuse3.so 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@238 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@240 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@242 -- # 
'[' -z /var/spdk/dependencies ']' 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@245 -- # export DEPENDENCY_DIR 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # export SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@249 -- # SPDK_BIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # export SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@250 -- # SPDK_EXAMPLE_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@253 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@254 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # export AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@256 -- # AR_TOOL=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/ar-xnvme-fixer 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@259 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@262 -- # '[' 0 -eq 0 ']' 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # export valgrind= 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@263 -- # valgrind= 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # uname -s 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@269 -- # '[' Linux = Linux ']' 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@270 -- # HUGEMEM=4096 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # export CLEAR_HUGE=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@271 -- # CLEAR_HUGE=yes 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@272 -- # [[ 0 -eq 1 ]] 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@279 -- # MAKE=make 00:11:29.793 11:57:16 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@280 -- # MAKEFLAGS=-j96 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # export HUGEMEM=4096 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@296 -- # HUGEMEM=4096 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@298 -- # NO_HUGE=() 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@299 -- # TEST_MODE= 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@300 -- # for i in "$@" 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@301 -- # case "$i" in 00:11:29.793 11:57:16 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@306 -- # TEST_TRANSPORT=tcp 00:11:29.793 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # [[ -z 246728 ]] 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@318 -- # kill -0 246728 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1680 -- # set_test_storage 2147483648 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@328 -- # [[ -v testdir ]] 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@330 -- # local requested_size=2147483648 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@331 -- # local mount target_dir 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@333 -- # local -A mounts fss sizes avails uses 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@334 -- # local source fs size avail mount use 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@336 -- # local storage_fallback storage_candidates 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # mktemp -udt spdk.XXXXXX 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@338 -- # storage_fallback=/tmp/spdk.5mAwly 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@343 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@345 -- # [[ -n '' ]] 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@350 -- # [[ -n '' ]] 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@355 -- # mkdir -p /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target /tmp/spdk.5mAwly/tests/target /tmp/spdk.5mAwly 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@358 -- # requested_size=2214592512 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # df -T 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@327 -- # grep -v Filesystem 00:11:29.794 11:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_devtmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=devtmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=67108864 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=67108864 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=0 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=/dev/pmem0 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=ext2 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=950202368 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=5284429824 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4334227456 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=spdk_root 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=overlay 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=185137393664 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=195974283264 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=10836889600 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97924960256 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=62181376 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=39171829760 00:11:29.794 11:57:17 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=39194857472 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=23027712 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=97984335872 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=97987141632 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=2805760 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # mounts["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@361 -- # fss["$mount"]=tmpfs 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # avails["$mount"]=19597422592 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@362 -- # sizes["$mount"]=19597426688 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@363 -- # uses["$mount"]=4096 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@360 -- # read -r source fs size use avail _ mount 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@366 -- # printf '* Looking for test storage...\n' 00:11:29.794 * Looking for test storage... 
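The mount-table scan above and the size comparison that resumes just below reduce to a small amount of bash. The sketch here is a simplified paraphrase of the set_test_storage helper being traced, not the verbatim SPDK code; testdir and storage_fallback are stand-ins for the values mktemp produced earlier in the trace.

  #!/usr/bin/env bash
  # Simplified sketch of set_test_storage: index available bytes per mount
  # point from `df -T`, then take the first candidate directory whose
  # filesystem can hold the ~2.2 GB the test requested.
  testdir=${testdir:-$PWD}                                  # assumed placeholder
  storage_fallback=${storage_fallback:-$(mktemp -udt spdk.XXXXXX)}
  requested_size=2214592512                                 # 2 GiB plus margin, as in the trace

  declare -A avails
  while read -r _ _ _ _ avail _ mount; do
      avails["$mount"]=$((avail * 1024))                    # df -T reports 1K blocks
  done < <(df -T | grep -v Filesystem)

  for target_dir in "$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback"; do
      mount=$(df "$target_dir" 2> /dev/null | awk '$1 !~ /Filesystem/{print $6}')
      [[ -n $mount ]] || continue                           # candidate does not exist yet
      target_space=${avails[$mount]:-0}
      if ((target_space >= requested_size)); then
          printf '* Found test storage at %s\n' "$target_dir"
          break
      fi
  done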
00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@368 -- # local target_space new_size 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@369 -- # for target_dir in "${storage_candidates[@]}" 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # df /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:29.794 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:30.052 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@372 -- # mount=/ 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@374 -- # target_space=185137393664 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@375 -- # (( target_space == 0 || target_space < requested_size )) 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@378 -- # (( target_space >= requested_size )) 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == tmpfs ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ overlay == ramfs ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@380 -- # [[ / == / ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@381 -- # new_size=13051482112 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@382 -- # (( new_size * 100 / sizes[/] > 95 )) 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # export SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@387 -- # SPDK_TEST_STORAGE=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@388 -- # printf '* Found test storage at %s\n' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.053 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@389 -- # return 0 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1682 -- # set -o errtrace 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1683 -- # shopt -s extdebug 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1684 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1686 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1687 -- # true 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1689 -- # xtrace_fd 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@25 -- # [[ -n 15 ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/15 ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@27 -- # exec 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@29 -- # exec 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 0 : 0 - 1]' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@18 -- # set -x 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # uname -s 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- scripts/common.sh@517 -- # source 
/etc/opt/spdk-pkgdep/paths/export.sh 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@5 -- # export PATH 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@47 -- # : 0 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@25 -- # '[' 0 -eq 1 
']' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@51 -- # have_pci_nics=0 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@12 -- # MALLOC_BDEV_SIZE=512 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@15 -- # nvmftestinit 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@448 -- # prepare_net_devs 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@410 -- # local -g is_hw=no 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@412 -- # remove_spdk_ns 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@285 -- # xtrace_disable 00:11:30.053 11:57:17 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # pci_devs=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@291 -- # local -a pci_devs 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # pci_net_devs=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # pci_drivers=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@293 -- # local -A pci_drivers 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # net_devs=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@295 -- # local -ga net_devs 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # e810=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@296 -- # local -ga e810 00:11:35.336 
11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # x722=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@297 -- # local -ga x722 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # mlx=() 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@298 -- # local -ga mlx 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:11:35.336 Found 0000:86:00.0 (0x8086 - 0x159b) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@340 -- # for pci in 
"${pci_devs[@]}" 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:11:35.336 Found 0000:86:00.1 (0x8086 - 0x159b) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:11:35.336 Found net devices under 0000:86:00.0: cvl_0_0 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:11:35.336 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@390 -- # [[ up == up ]] 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:11:35.337 Found net devices under 0000:86:00.1: cvl_0_1 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@414 -- # is_hw=yes 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:11:35.337 11:57:21 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:11:35.337 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:11:35.337 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.193 ms 00:11:35.337 00:11:35.337 --- 10.0.0.2 ping statistics --- 00:11:35.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.337 rtt min/avg/max/mdev = 0.193/0.193/0.193/0.000 ms 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:11:35.337 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:11:35.337 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.442 ms 00:11:35.337 00:11:35.337 --- 10.0.0.1 ping statistics --- 00:11:35.337 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:11:35.337 rtt min/avg/max/mdev = 0.442/0.442/0.442/0.000 ms 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@422 -- # return 0 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@105 -- # run_test nvmf_filesystem_no_in_capsule nvmf_filesystem_part 0 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:35.337 ************************************ 00:11:35.337 START TEST nvmf_filesystem_no_in_capsule 00:11:35.337 ************************************ 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 0 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@47 -- # in_capsule=0 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=249747 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 249747 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 249747 ']' 00:11:35.337 11:57:22 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:35.337 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:35.337 11:57:22 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:35.337 [2024-07-25 11:57:22.320116] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:11:35.337 [2024-07-25 11:57:22.320161] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:35.337 EAL: No free 2048 kB hugepages reported on node 1 00:11:35.337 [2024-07-25 11:57:22.377074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:35.337 [2024-07-25 11:57:22.458046] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:35.337 [2024-07-25 11:57:22.458083] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:35.337 [2024-07-25 11:57:22.458090] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:35.337 [2024-07-25 11:57:22.458097] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:35.337 [2024-07-25 11:57:22.458101] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
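Before the target app prints the notices above, nvmf/common.sh builds the two-port test topology traced at @248-268 and nvmfappstart launches nvmf_tgt inside the namespace at @480. A condensed sketch of those steps, using the same interface names, addresses, and core mask as this run (not the exact helper functions):

  #!/usr/bin/env bash
  # First E810 port (cvl_0_0) moves into a namespace and becomes the NVMe/TCP
  # target side at 10.0.0.2; the second port (cvl_0_1) stays in the root
  # namespace as the initiator side at 10.0.0.1.
  NS=cvl_0_0_ns_spdk
  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # checkout path from this job

  ip netns add $NS
  ip link set cvl_0_0 netns $NS
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec $NS ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec $NS ip link set cvl_0_0 up
  ip netns exec $NS ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in

  ping -c 1 10.0.0.2                     # initiator side -> target side
  ip netns exec $NS ping -c 1 10.0.0.1   # target side -> initiator side
  modprobe nvme-tcp                      # kernel initiator used by later connect steps

  # Start the target in the namespace with the same -i/-e/-m as the trace and
  # wait for its JSON-RPC socket before configuring subsystems.
  ip netns exec $NS "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done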
00:11:35.337 [2024-07-25 11:57:22.458141] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:35.337 [2024-07-25 11:57:22.458240] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:35.337 [2024-07-25 11:57:22.458323] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:35.337 [2024-07-25 11:57:22.458324] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:35.905 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:35.905 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:35.905 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:35.905 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:35.905 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 [2024-07-25 11:57:23.170321] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 Malloc1 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.165 11:57:23 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@55 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.165 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.166 [2024-07-25 11:57:23.314320] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:36.166 { 00:11:36.166 "name": "Malloc1", 00:11:36.166 "aliases": [ 00:11:36.166 "7d105d5c-b864-456a-b927-912e2a36fe3a" 00:11:36.166 ], 00:11:36.166 "product_name": "Malloc disk", 00:11:36.166 "block_size": 512, 00:11:36.166 "num_blocks": 1048576, 00:11:36.166 "uuid": "7d105d5c-b864-456a-b927-912e2a36fe3a", 00:11:36.166 "assigned_rate_limits": { 00:11:36.166 "rw_ios_per_sec": 0, 00:11:36.166 "rw_mbytes_per_sec": 0, 00:11:36.166 "r_mbytes_per_sec": 0, 00:11:36.166 "w_mbytes_per_sec": 0 00:11:36.166 }, 00:11:36.166 "claimed": true, 00:11:36.166 "claim_type": "exclusive_write", 00:11:36.166 "zoned": false, 00:11:36.166 "supported_io_types": { 00:11:36.166 "read": 
true, 00:11:36.166 "write": true, 00:11:36.166 "unmap": true, 00:11:36.166 "flush": true, 00:11:36.166 "reset": true, 00:11:36.166 "nvme_admin": false, 00:11:36.166 "nvme_io": false, 00:11:36.166 "nvme_io_md": false, 00:11:36.166 "write_zeroes": true, 00:11:36.166 "zcopy": true, 00:11:36.166 "get_zone_info": false, 00:11:36.166 "zone_management": false, 00:11:36.166 "zone_append": false, 00:11:36.166 "compare": false, 00:11:36.166 "compare_and_write": false, 00:11:36.166 "abort": true, 00:11:36.166 "seek_hole": false, 00:11:36.166 "seek_data": false, 00:11:36.166 "copy": true, 00:11:36.166 "nvme_iov_md": false 00:11:36.166 }, 00:11:36.166 "memory_domains": [ 00:11:36.166 { 00:11:36.166 "dma_device_id": "system", 00:11:36.166 "dma_device_type": 1 00:11:36.166 }, 00:11:36.166 { 00:11:36.166 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:36.166 "dma_device_type": 2 00:11:36.166 } 00:11:36.166 ], 00:11:36.166 "driver_specific": {} 00:11:36.166 } 00:11:36.166 ]' 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:36.166 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:36.425 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:36.425 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:36.425 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:36.425 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:36.425 11:57:23 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:37.363 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:37.363 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:37.363 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:37.363 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:37.363 11:57:24 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # grep -c 
SPDKISFASTANDAWESOME 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:39.902 11:57:26 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:40.162 11:57:27 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:41.101 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@76 -- # '[' 0 -eq 0 ']' 00:11:41.101 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@77 -- # run_test filesystem_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:41.101 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:41.101 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:41.101 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:41.361 ************************************ 00:11:41.361 START TEST filesystem_ext4 00:11:41.361 ************************************ 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 
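The run_test entry above starts filesystem_ext4, and the btrfs and xfs variants later in the trace repeat the same cycle against /dev/nvme0n1p1. A condensed sketch of that per-filesystem check, based only on the commands visible in the log (mkfs, mount, touch/rm, umount, then confirming the target pid and the block devices are still present); the force flags mirror make_filesystem, while the function name and its layout are illustrative:

  filesystem_check_sketch() {
      local fstype=$1 pid=$2 dev=/dev/nvme0n1p1
      case $fstype in
          ext4)  mkfs.ext4 -F "$dev" ;;    # -F force, as in make_filesystem for ext4
          btrfs) mkfs.btrfs -f "$dev" ;;   # -f force for the non-ext4 cases
          xfs)   mkfs.xfs -f "$dev" ;;
      esac
      mount "$dev" /mnt/device
      touch /mnt/device/aaa && sync
      rm /mnt/device/aaa && sync
      umount /mnt/device
      kill -0 "$pid"                           # nvmf_tgt (pid 249747 in this run) must still be alive
      lsblk -l -o NAME | grep -q -w nvme0n1    # namespace still exposed to the initiator
      lsblk -l -o NAME | grep -q -w nvme0n1p1  # partition still visible after umount
  }

Invoked along the lines of: filesystem_check_sketch ext4 249747.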
00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:41.361 mke2fs 1.46.5 (30-Dec-2021) 00:11:41.361 Discarding device blocks: 0/522240 done 00:11:41.361 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:41.361 Filesystem UUID: a3b984b0-c2c6-483a-98b3-d0514bf69449 00:11:41.361 Superblock backups stored on blocks: 00:11:41.361 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:41.361 00:11:41.361 Allocating group tables: 0/64 done 00:11:41.361 Writing inode tables: 0/64 done 00:11:41.361 Creating journal (8192 blocks): done 00:11:41.361 Writing superblocks and filesystem accounting information: 0/64 done 00:11:41.361 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:41.361 11:57:28 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@25 -- # sync 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@27 -- # sync 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:42.300 
11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@37 -- # kill -0 249747 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:42.300 00:11:42.300 real 0m1.051s 00:11:42.300 user 0m0.019s 00:11:42.300 sys 0m0.049s 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:42.300 ************************************ 00:11:42.300 END TEST filesystem_ext4 00:11:42.300 ************************************ 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@78 -- # run_test filesystem_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:42.300 ************************************ 00:11:42.300 START TEST filesystem_btrfs 00:11:42.300 ************************************ 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@926 -- # local i=0 00:11:42.300 11:57:29 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:42.300 11:57:29 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:42.869 btrfs-progs v6.6.2 00:11:42.869 See https://btrfs.readthedocs.io for more information. 00:11:42.869 00:11:42.869 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:42.869 NOTE: several default settings have changed in version 5.15, please make sure 00:11:42.869 this does not affect your deployments: 00:11:42.869 - DUP for metadata (-m dup) 00:11:42.869 - enabled no-holes (-O no-holes) 00:11:42.869 - enabled free-space-tree (-R free-space-tree) 00:11:42.869 00:11:42.869 Label: (null) 00:11:42.869 UUID: 67f575b1-eb8f-48b8-bd50-75838c18ddb4 00:11:42.869 Node size: 16384 00:11:42.869 Sector size: 4096 00:11:42.869 Filesystem size: 510.00MiB 00:11:42.869 Block group profiles: 00:11:42.869 Data: single 8.00MiB 00:11:42.869 Metadata: DUP 32.00MiB 00:11:42.869 System: DUP 8.00MiB 00:11:42.869 SSD detected: yes 00:11:42.869 Zoned device: no 00:11:42.869 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:42.869 Runtime features: free-space-tree 00:11:42.869 Checksum: crc32c 00:11:42.869 Number of devices: 1 00:11:42.869 Devices: 00:11:42.869 ID SIZE PATH 00:11:42.869 1 510.00MiB /dev/nvme0n1p1 00:11:42.869 00:11:42.869 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:42.869 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@25 -- # sync 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@27 -- # sync 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@37 -- # kill -0 249747 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@40 -- # 
lsblk -l -o NAME 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:43.807 00:11:43.807 real 0m1.457s 00:11:43.807 user 0m0.022s 00:11:43.807 sys 0m0.066s 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:43.807 ************************************ 00:11:43.807 END TEST filesystem_btrfs 00:11:43.807 ************************************ 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@79 -- # run_test filesystem_xfs nvmf_filesystem_create xfs nvme0n1 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:43.807 11:57:30 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:43.807 ************************************ 00:11:43.807 START TEST filesystem_xfs 00:11:43.807 ************************************ 00:11:43.807 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:43.807 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@927 -- # local force 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:43.808 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:43.808 11:57:31 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:44.067 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:44.067 = sectsz=512 attr=2, projid32bit=1 00:11:44.067 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:44.067 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:44.067 data = bsize=4096 blocks=130560, imaxpct=25 00:11:44.067 = sunit=0 swidth=0 blks 00:11:44.067 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:44.067 log =internal log bsize=4096 blocks=16384, version=2 00:11:44.067 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:44.067 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:45.004 Discarding blocks...Done. 00:11:45.004 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:45.004 11:57:31 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:46.907 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:46.907 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@25 -- # sync 00:11:46.907 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:46.907 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@27 -- # sync 00:11:46.907 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@29 -- # i=0 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@37 -- # kill -0 249747 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:46.908 00:11:46.908 real 0m2.861s 00:11:46.908 user 0m0.020s 00:11:46.908 sys 0m0.052s 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule.filesystem_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:46.908 ************************************ 00:11:46.908 END TEST filesystem_xfs 00:11:46.908 ************************************ 00:11:46.908 11:57:33 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:46.908 11:57:33 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:47.169 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@101 -- # killprocess 249747 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 249747 ']' 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@952 -- # kill -0 249747 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 249747 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:47.169 11:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 249747' 00:11:47.169 killing process with pid 249747 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@967 -- # kill 249747 00:11:47.169 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@972 -- # wait 249747 00:11:47.429 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:11:47.430 00:11:47.430 real 0m12.415s 00:11:47.430 user 0m48.704s 00:11:47.430 sys 0m1.126s 00:11:47.430 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:47.430 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_no_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.430 ************************************ 00:11:47.430 END TEST nvmf_filesystem_no_in_capsule 00:11:47.430 ************************************ 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@106 -- # run_test nvmf_filesystem_in_capsule nvmf_filesystem_part 4096 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:11:47.689 ************************************ 00:11:47.689 START TEST nvmf_filesystem_in_capsule 00:11:47.689 ************************************ 00:11:47.689 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1123 -- # nvmf_filesystem_part 4096 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@47 -- # in_capsule=4096 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@49 -- # nvmfappstart -m 0xF 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@722 -- # xtrace_disable 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@481 -- # nvmfpid=252036 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@482 -- # waitforlisten 252036 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:11:47.690 11:57:34 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@829 -- # '[' -z 252036 ']' 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@834 -- # local max_retries=100 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:47.690 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@838 -- # xtrace_disable 00:11:47.690 11:57:34 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:47.690 [2024-07-25 11:57:34.795259] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:11:47.690 [2024-07-25 11:57:34.795300] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:11:47.690 EAL: No free 2048 kB hugepages reported on node 1 00:11:47.690 [2024-07-25 11:57:34.851885] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:11:47.690 [2024-07-25 11:57:34.932330] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:11:47.690 [2024-07-25 11:57:34.932366] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:11:47.690 [2024-07-25 11:57:34.932374] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:11:47.690 [2024-07-25 11:57:34.932380] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:11:47.690 [2024-07-25 11:57:34.932385] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
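From here the in-capsule run repeats the target setup, the only difference being the in-capsule data size passed to nvmf_create_transport (-c 4096 instead of -c 0), before the initiator reconnects. The RPC sequence that the following entries trace, written out against scripts/rpc.py (the rpc_cmd wrapper seen in the log is assumed to drive the same socket; the explicit rpc.py spelling here is for readability only):

  rpc="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py"
  $rpc nvmf_create_transport -t tcp -o -u 8192 -c 4096   # allow 4096 bytes of in-capsule data
  $rpc bdev_malloc_create 512 512 -b Malloc1             # 512 MiB malloc bdev, 512 B blocks
  $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
  $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # initiator side, matching the nvme connect entry further down:
  nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420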
00:11:47.690 [2024-07-25 11:57:34.932434] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:11:47.690 [2024-07-25 11:57:34.932531] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:11:47.690 [2024-07-25 11:57:34.932616] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:11:47.690 [2024-07-25 11:57:34.932618] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@862 -- # return 0 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@728 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@50 -- # malloc_name=Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@52 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 4096 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 [2024-07-25 11:57:35.656361] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@53 -- # rpc_cmd bdev_malloc_create 512 512 -b Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@54 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@55 
-- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@56 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 [2024-07-25 11:57:35.802692] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # get_bdev_size Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1378 -- # local bdev_name=Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1379 -- # local bdev_info 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1380 -- # local bs 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1381 -- # local nb 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # rpc_cmd bdev_get_bdevs -b Malloc1 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1382 -- # bdev_info='[ 00:11:48.628 { 00:11:48.628 "name": "Malloc1", 00:11:48.628 "aliases": [ 00:11:48.628 "40901105-9b08-4fbc-875d-69f3409d4fe2" 00:11:48.628 ], 00:11:48.628 "product_name": "Malloc disk", 00:11:48.628 "block_size": 512, 00:11:48.628 "num_blocks": 1048576, 00:11:48.628 "uuid": "40901105-9b08-4fbc-875d-69f3409d4fe2", 00:11:48.628 "assigned_rate_limits": { 00:11:48.628 "rw_ios_per_sec": 0, 00:11:48.628 "rw_mbytes_per_sec": 0, 00:11:48.628 "r_mbytes_per_sec": 0, 00:11:48.628 "w_mbytes_per_sec": 0 00:11:48.628 }, 00:11:48.628 "claimed": true, 00:11:48.628 "claim_type": "exclusive_write", 00:11:48.628 "zoned": false, 00:11:48.628 "supported_io_types": { 00:11:48.628 "read": true, 00:11:48.628 "write": true, 00:11:48.628 "unmap": true, 00:11:48.628 "flush": true, 00:11:48.628 "reset": true, 00:11:48.628 "nvme_admin": false, 
00:11:48.628 "nvme_io": false, 00:11:48.628 "nvme_io_md": false, 00:11:48.628 "write_zeroes": true, 00:11:48.628 "zcopy": true, 00:11:48.628 "get_zone_info": false, 00:11:48.628 "zone_management": false, 00:11:48.628 "zone_append": false, 00:11:48.628 "compare": false, 00:11:48.628 "compare_and_write": false, 00:11:48.628 "abort": true, 00:11:48.628 "seek_hole": false, 00:11:48.628 "seek_data": false, 00:11:48.628 "copy": true, 00:11:48.628 "nvme_iov_md": false 00:11:48.628 }, 00:11:48.628 "memory_domains": [ 00:11:48.628 { 00:11:48.628 "dma_device_id": "system", 00:11:48.628 "dma_device_type": 1 00:11:48.628 }, 00:11:48.628 { 00:11:48.628 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:11:48.628 "dma_device_type": 2 00:11:48.628 } 00:11:48.628 ], 00:11:48.628 "driver_specific": {} 00:11:48.628 } 00:11:48.628 ]' 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # jq '.[] .block_size' 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1383 -- # bs=512 00:11:48.628 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # jq '.[] .num_blocks' 00:11:48.918 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1384 -- # nb=1048576 00:11:48.918 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1387 -- # bdev_size=512 00:11:48.918 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1388 -- # echo 512 00:11:48.918 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@58 -- # malloc_size=536870912 00:11:48.918 11:57:35 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@60 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:11:49.857 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@62 -- # waitforserial SPDKISFASTANDAWESOME 00:11:49.857 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1198 -- # local i=0 00:11:49.857 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:11:49.857 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:11:49.857 11:57:37 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1205 -- # sleep 2 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:11:52.393 11:57:39 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1208 -- # return 0 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # lsblk -l -o NAME,SERIAL 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # grep -oP '([\w]*)(?=\s+SPDKISFASTANDAWESOME)' 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@63 -- # nvme_name=nvme0n1 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # sec_size_to_bytes nvme0n1 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@76 -- # local dev=nvme0n1 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@78 -- # [[ -e /sys/block/nvme0n1 ]] 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- setup/common.sh@80 -- # echo 536870912 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@64 -- # nvme_size=536870912 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@66 -- # mkdir -p /mnt/device 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@67 -- # (( nvme_size == malloc_size )) 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@68 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST 0% 100% 00:11:52.393 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@69 -- # partprobe 00:11:52.652 11:57:39 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@70 -- # sleep 1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@76 -- # '[' 4096 -eq 0 ']' 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@81 -- # run_test filesystem_in_capsule_ext4 nvmf_filesystem_create ext4 nvme0n1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:53.589 ************************************ 00:11:53.589 START TEST filesystem_in_capsule_ext4 00:11:53.589 ************************************ 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create ext4 nvme0n1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@18 -- # fstype=ext4 00:11:53.589 11:57:40 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@21 -- # make_filesystem ext4 /dev/nvme0n1p1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@924 -- # local fstype=ext4 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@926 -- # local i=0 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@927 -- # local force 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@929 -- # '[' ext4 = ext4 ']' 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@930 -- # force=-F 00:11:53.589 11:57:40 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@935 -- # mkfs.ext4 -F /dev/nvme0n1p1 00:11:53.589 mke2fs 1.46.5 (30-Dec-2021) 00:11:53.849 Discarding device blocks: 0/522240 done 00:11:53.849 Creating filesystem with 522240 1k blocks and 130560 inodes 00:11:53.849 Filesystem UUID: 3b9c4329-e252-4fcf-85fe-3ee2b81c86e6 00:11:53.849 Superblock backups stored on blocks: 00:11:53.849 8193, 24577, 40961, 57345, 73729, 204801, 221185, 401409 00:11:53.849 00:11:53.849 Allocating group tables: 0/64 done 00:11:53.849 Writing inode tables: 0/64 done 00:11:53.849 Creating journal (8192 blocks): done 00:11:55.045 Writing superblocks and filesystem accounting information: 0/64 done 00:11:55.045 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@943 -- # return 0 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@25 -- # sync 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@27 -- # sync 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@29 -- # i=0 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.045 11:57:42 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@37 -- # kill -0 252036 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:55.045 00:11:55.045 real 0m1.459s 00:11:55.045 user 0m0.025s 00:11:55.045 sys 0m0.044s 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_ext4 -- common/autotest_common.sh@10 -- # set +x 00:11:55.045 ************************************ 00:11:55.045 END TEST filesystem_in_capsule_ext4 00:11:55.045 ************************************ 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@82 -- # run_test filesystem_in_capsule_btrfs nvmf_filesystem_create btrfs nvme0n1 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:55.045 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:55.305 ************************************ 00:11:55.305 START TEST filesystem_in_capsule_btrfs 00:11:55.305 ************************************ 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create btrfs nvme0n1 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@18 -- # fstype=btrfs 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@21 -- # make_filesystem btrfs /dev/nvme0n1p1 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@924 -- # local fstype=btrfs 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
common/autotest_common.sh@926 -- # local i=0 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@927 -- # local force 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@929 -- # '[' btrfs = ext4 ']' 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@932 -- # force=-f 00:11:55.305 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@935 -- # mkfs.btrfs -f /dev/nvme0n1p1 00:11:55.564 btrfs-progs v6.6.2 00:11:55.564 See https://btrfs.readthedocs.io for more information. 00:11:55.564 00:11:55.564 Performing full device TRIM /dev/nvme0n1p1 (510.00MiB) ... 00:11:55.564 NOTE: several default settings have changed in version 5.15, please make sure 00:11:55.564 this does not affect your deployments: 00:11:55.564 - DUP for metadata (-m dup) 00:11:55.564 - enabled no-holes (-O no-holes) 00:11:55.564 - enabled free-space-tree (-R free-space-tree) 00:11:55.564 00:11:55.564 Label: (null) 00:11:55.564 UUID: 1dbfddf2-845d-4ade-9222-9b4499ff5b07 00:11:55.564 Node size: 16384 00:11:55.564 Sector size: 4096 00:11:55.564 Filesystem size: 510.00MiB 00:11:55.564 Block group profiles: 00:11:55.564 Data: single 8.00MiB 00:11:55.564 Metadata: DUP 32.00MiB 00:11:55.564 System: DUP 8.00MiB 00:11:55.564 SSD detected: yes 00:11:55.564 Zoned device: no 00:11:55.564 Incompat features: extref, skinny-metadata, no-holes, free-space-tree 00:11:55.564 Runtime features: free-space-tree 00:11:55.564 Checksum: crc32c 00:11:55.564 Number of devices: 1 00:11:55.564 Devices: 00:11:55.564 ID SIZE PATH 00:11:55.564 1 510.00MiB /dev/nvme0n1p1 00:11:55.564 00:11:55.564 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@943 -- # return 0 00:11:55.564 11:57:42 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@25 -- # sync 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@27 -- # sync 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@29 -- # i=0 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@37 -- # kill -0 252036 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- 
target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:55.824 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:56.084 00:11:56.084 real 0m0.759s 00:11:56.084 user 0m0.026s 00:11:56.084 sys 0m0.054s 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_btrfs -- common/autotest_common.sh@10 -- # set +x 00:11:56.084 ************************************ 00:11:56.084 END TEST filesystem_in_capsule_btrfs 00:11:56.084 ************************************ 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@83 -- # run_test filesystem_in_capsule_xfs nvmf_filesystem_create xfs nvme0n1 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1105 -- # xtrace_disable 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:56.084 ************************************ 00:11:56.084 START TEST filesystem_in_capsule_xfs 00:11:56.084 ************************************ 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1123 -- # nvmf_filesystem_create xfs nvme0n1 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@18 -- # fstype=xfs 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@19 -- # nvme_name=nvme0n1 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@21 -- # make_filesystem xfs /dev/nvme0n1p1 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@924 -- # local fstype=xfs 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@925 -- # local dev_name=/dev/nvme0n1p1 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@926 -- # local i=0 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@927 -- # local force 00:11:56.084 11:57:43 
nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@929 -- # '[' xfs = ext4 ']' 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@932 -- # force=-f 00:11:56.084 11:57:43 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@935 -- # mkfs.xfs -f /dev/nvme0n1p1 00:11:56.084 meta-data=/dev/nvme0n1p1 isize=512 agcount=4, agsize=32640 blks 00:11:56.084 = sectsz=512 attr=2, projid32bit=1 00:11:56.084 = crc=1 finobt=1, sparse=1, rmapbt=0 00:11:56.084 = reflink=1 bigtime=1 inobtcount=1 nrext64=0 00:11:56.084 data = bsize=4096 blocks=130560, imaxpct=25 00:11:56.084 = sunit=0 swidth=0 blks 00:11:56.084 naming =version 2 bsize=4096 ascii-ci=0, ftype=1 00:11:56.084 log =internal log bsize=4096 blocks=16384, version=2 00:11:56.084 = sectsz=512 sunit=0 blks, lazy-count=1 00:11:56.084 realtime =none extsz=4096 blocks=0, rtextents=0 00:11:57.033 Discarding blocks...Done. 00:11:57.033 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@943 -- # return 0 00:11:57.033 11:57:44 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@23 -- # mount /dev/nvme0n1p1 /mnt/device 00:11:59.568 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@24 -- # touch /mnt/device/aaa 00:11:59.568 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@25 -- # sync 00:11:59.568 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@26 -- # rm /mnt/device/aaa 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@27 -- # sync 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@29 -- # i=0 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@30 -- # umount /mnt/device 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@37 -- # kill -0 252036 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # lsblk -l -o NAME 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@40 -- # grep -q -w nvme0n1 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # lsblk -l -o NAME 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- target/filesystem.sh@43 -- # grep -q -w nvme0n1p1 00:11:59.569 00:11:59.569 real 0m3.490s 00:11:59.569 user 0m0.025s 00:11:59.569 sys 0m0.048s 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@1124 -- # xtrace_disable 00:11:59.569 
11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule.filesystem_in_capsule_xfs -- common/autotest_common.sh@10 -- # set +x 00:11:59.569 ************************************ 00:11:59.569 END TEST filesystem_in_capsule_xfs 00:11:59.569 ************************************ 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1142 -- # return 0 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@91 -- # flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@93 -- # sync 00:11:59.569 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@94 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:11:59.830 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@95 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1219 -- # local i=0 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1231 -- # return 0 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@97 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@559 -- # xtrace_disable 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@101 -- # killprocess 252036 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@948 -- # '[' -z 252036 ']' 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@952 -- # kill -0 252036 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # uname 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 252036 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@966 -- # echo 'killing process with pid 252036' 00:11:59.830 killing process with pid 252036 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@967 -- # kill 252036 00:11:59.830 11:57:46 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@972 -- # wait 252036 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- target/filesystem.sh@102 -- # nvmfpid= 00:12:00.091 00:12:00.091 real 0m12.557s 00:12:00.091 user 0m49.333s 00:12:00.091 sys 0m1.075s 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem.nvmf_filesystem_in_capsule -- common/autotest_common.sh@10 -- # set +x 00:12:00.091 ************************************ 00:12:00.091 END TEST nvmf_filesystem_in_capsule 00:12:00.091 ************************************ 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1142 -- # return 0 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- target/filesystem.sh@108 -- # nvmftestfini 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@117 -- # sync 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@120 -- # set +e 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:00.091 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:00.351 rmmod nvme_tcp 00:12:00.351 rmmod nvme_fabrics 00:12:00.351 rmmod nvme_keyring 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@124 -- # set -e 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@125 -- # return 0 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- 
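Once all three filesystem variants pass, the trace above tears the target down: the test partition is removed, the initiator disconnects, the subsystem is deleted over RPC, the nvmf_tgt process (pid 252036 in this run) is killed, and nvmftestfini unloads the NVMe/TCP modules. A rough sketch of that sequence; scripts/rpc.py is assumed here as a stand-in for the harness's rpc_cmd wrapper, and the pid variable is illustrative:

  # Teardown sequence condensed from the trace (rpc.py stand-in and $nvmfpid are assumptions)
  flock /dev/nvme0n1 parted -s /dev/nvme0n1 rm 1       # drop the SPDK_TEST partition
  sync
  nvme disconnect -n nqn.2016-06.io.spdk:cnode1        # detach the initiator side
  scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
  kill "$nvmfpid"                                      # nvmf_tgt, 252036 in this run
  modprobe -r nvme-tcp && modprobe -r nvme-fabrics     # nvmftestfini module cleanup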
nvmf/common.sh@278 -- # remove_spdk_ns 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:00.351 11:57:47 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:02.261 00:12:02.261 real 0m32.618s 00:12:02.261 user 1m39.579s 00:12:02.261 sys 0m6.304s 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_filesystem -- common/autotest_common.sh@10 -- # set +x 00:12:02.261 ************************************ 00:12:02.261 END TEST nvmf_filesystem 00:12:02.261 ************************************ 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@18 -- # run_test nvmf_target_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:02.261 ************************************ 00:12:02.261 START TEST nvmf_target_discovery 00:12:02.261 ************************************ 00:12:02.261 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/discovery.sh --transport=tcp 00:12:02.520 * Looking for test storage... 
00:12:02.520 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # uname -s 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:02.520 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@5 -- # export PATH 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@47 -- # : 0 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery 
-- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@11 -- # NULL_BDEV_SIZE=102400 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@12 -- # NULL_BLOCK_SIZE=512 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@13 -- # NVMF_PORT_REFERRAL=4430 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@15 -- # hash nvme 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@20 -- # nvmftestinit 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:12:02.521 11:57:49 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # e810=() 00:12:07.794 11:57:54 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # x722=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # mlx=() 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:07.794 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:07.795 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:07.795 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:07.795 Found net devices under 0000:86:00.0: cvl_0_0 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 
00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:07.795 Found net devices under 0000:86:00.1: cvl_0_1 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:07.795 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:07.795 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.182 ms 00:12:07.795 00:12:07.795 --- 10.0.0.2 ping statistics --- 00:12:07.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.795 rtt min/avg/max/mdev = 0.182/0.182/0.182/0.000 ms 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:07.795 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:07.795 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.427 ms 00:12:07.795 00:12:07.795 --- 10.0.0.1 ping statistics --- 00:12:07.795 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:07.795 rtt min/avg/max/mdev = 0.427/0.427/0.427/0.000 ms 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@422 -- # return 0 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@21 -- # nvmfappstart -m 0xF 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@481 -- # nvmfpid=257623 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@482 -- # waitforlisten 257623 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@829 -- # '[' -z 257623 ']' 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
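The block above is nvmf_tcp_init plus nvmfappstart from nvmf/common.sh: the target-side NIC (cvl_0_0 on this machine) is moved into a private network namespace, both ends get addresses on 10.0.0.0/24, port 4420 is opened, reachability is checked with a ping in each direction, and nvmf_tgt is then launched inside the namespace. Condensed from the commands in the trace (interface names and the build path are specific to this run):

  # Target/initiator split used by the TCP tests, as traced above
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator side
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target side
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2 && ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &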
00:12:07.795 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:07.795 11:57:54 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:07.795 [2024-07-25 11:57:55.018538] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:12:07.795 [2024-07-25 11:57:55.018580] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:08.056 EAL: No free 2048 kB hugepages reported on node 1 00:12:08.056 [2024-07-25 11:57:55.075583] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:08.056 [2024-07-25 11:57:55.157895] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:08.056 [2024-07-25 11:57:55.157931] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:08.056 [2024-07-25 11:57:55.157939] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:08.056 [2024-07-25 11:57:55.157945] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:08.056 [2024-07-25 11:57:55.157950] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:08.056 [2024-07-25 11:57:55.157993] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:08.056 [2024-07-25 11:57:55.158093] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:08.056 [2024-07-25 11:57:55.158117] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:08.056 [2024-07-25 11:57:55.158119] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.626 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:08.626 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@862 -- # return 0 00:12:08.626 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:08.626 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:08.626 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@23 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 [2024-07-25 11:57:55.882571] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # seq 1 4 00:12:08.887 11:57:55 
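The rest of the discovery test, traced below, provisions the target over RPC before the initiator-side nvme discover and nvmf_get_subsystems checks: a TCP transport, four null bdevs, four subsystems each with one namespace and a listener on 10.0.0.2:4420, plus a discovery listener and a referral on port 4430. Condensed into plain calls, with scripts/rpc.py assumed as a stand-in for the harness's rpc_cmd wrapper (the trace also passes --hostnqn/--hostid to nvme discover):

  # Provisioning sequence for the discovery test, condensed from the trace below
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  for i in 1 2 3 4; do
    scripts/rpc.py bdev_null_create Null$i 102400 512
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Null$i
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
  done
  scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  scripts/rpc.py nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430
  nvme discover -t tcp -a 10.0.0.2 -s 4420   # 6 log entries: current discovery, 4 subsystems, 1 referral
  scripts/rpc.py nvmf_get_subsystems          # JSON view of the same configuration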
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null1 102400 512 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 Null1 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Null1 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 [2024-07-25 11:57:55.927999] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null2 102400 512 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 Null2 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Null2 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null3 102400 512 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 Null3 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode3 -a -s SPDK00000000000003 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode3 Null3 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode3 -t tcp -a 10.0.0.2 -s 4420 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@26 -- # for i in $(seq 1 4) 00:12:08.887 11:57:55 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@27 -- # rpc_cmd bdev_null_create Null4 102400 512 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:55 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 Null4 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode4 -a -s SPDK00000000000004 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@29 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode4 Null4 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@30 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode4 -t tcp -a 10.0.0.2 -s 4420 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@32 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.887 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@35 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@37 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:12:08.888 00:12:08.888 Discovery Log Number of Records 6, Generation counter 6 00:12:08.888 
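
The loop traced above (target/discovery.sh@26-35) stands up four null-backed subsystems, a discovery listener and one referral before the initiator-side query runs. Stripped of the harness wrappers it is roughly the following minimal sketch; here rpc stands for SPDK's scripts/rpc.py pointed at the target's RPC socket (rpc_cmd in the harness is assumed to be a thin wrapper around it), and the 10.0.0.2/4420/4430 values are the ones this CI environment uses:

  rpc=./scripts/rpc.py        # assumed path; adjust to the checkout being tested
  for i in $(seq 1 4); do
      # null bdev backing the subsystem (size/block-size arguments as used by discovery.sh@27)
      "$rpc" bdev_null_create "Null$i" 102400 512
      # subsystem open to any host (-a) with a fixed serial number
      "$rpc" nvmf_create_subsystem "nqn.2016-06.io.spdk:cnode$i" -a -s "SPDK0000000000000$i"
      # attach the bdev as a namespace and listen on TCP 10.0.0.2:4420
      "$rpc" nvmf_subsystem_add_ns "nqn.2016-06.io.spdk:cnode$i" "Null$i"
      "$rpc" nvmf_subsystem_add_listener "nqn.2016-06.io.spdk:cnode$i" -t tcp -a 10.0.0.2 -s 4420
  done
  # discovery service listener plus one referral pointing at port 4430
  "$rpc" nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  "$rpc" nvmf_discovery_add_referral -t tcp -a 10.0.0.2 -s 4430

With that in place the discovery log queried below should report six records: the current discovery subsystem, the four NVMe subsystems, and the referral.
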
=====Discovery Log Entry 0====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: current discovery subsystem 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4420 00:12:08.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: explicit discovery connections, duplicate discovery information 00:12:08.888 sectype: none 00:12:08.888 =====Discovery Log Entry 1====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: nvme subsystem 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4420 00:12:08.888 subnqn: nqn.2016-06.io.spdk:cnode1 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: none 00:12:08.888 sectype: none 00:12:08.888 =====Discovery Log Entry 2====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: nvme subsystem 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4420 00:12:08.888 subnqn: nqn.2016-06.io.spdk:cnode2 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: none 00:12:08.888 sectype: none 00:12:08.888 =====Discovery Log Entry 3====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: nvme subsystem 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4420 00:12:08.888 subnqn: nqn.2016-06.io.spdk:cnode3 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: none 00:12:08.888 sectype: none 00:12:08.888 =====Discovery Log Entry 4====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: nvme subsystem 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4420 00:12:08.888 subnqn: nqn.2016-06.io.spdk:cnode4 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: none 00:12:08.888 sectype: none 00:12:08.888 =====Discovery Log Entry 5====== 00:12:08.888 trtype: tcp 00:12:08.888 adrfam: ipv4 00:12:08.888 subtype: discovery subsystem referral 00:12:08.888 treq: not required 00:12:08.888 portid: 0 00:12:08.888 trsvcid: 4430 00:12:08.888 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:12:08.888 traddr: 10.0.0.2 00:12:08.888 eflags: none 00:12:08.888 sectype: none 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@39 -- # echo 'Perform nvmf subsystem discovery via RPC' 00:12:08.888 Perform nvmf subsystem discovery via RPC 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@40 -- # rpc_cmd nvmf_get_subsystems 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:08.888 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:08.888 [ 00:12:08.888 { 00:12:08.888 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:12:08.888 "subtype": "Discovery", 00:12:08.888 "listen_addresses": [ 00:12:08.888 { 00:12:08.888 "trtype": "TCP", 00:12:08.888 "adrfam": "IPv4", 00:12:08.888 "traddr": "10.0.0.2", 00:12:08.888 "trsvcid": "4420" 00:12:08.888 } 00:12:08.888 ], 00:12:08.888 "allow_any_host": true, 00:12:08.888 "hosts": [] 00:12:08.888 }, 00:12:08.888 { 00:12:08.888 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:12:08.888 "subtype": "NVMe", 00:12:08.888 "listen_addresses": [ 00:12:08.888 { 00:12:08.888 "trtype": "TCP", 00:12:08.888 "adrfam": "IPv4", 00:12:08.888 "traddr": "10.0.0.2", 00:12:08.888 "trsvcid": "4420" 00:12:08.888 } 00:12:08.888 ], 00:12:08.888 "allow_any_host": true, 00:12:08.888 "hosts": [], 00:12:08.888 
"serial_number": "SPDK00000000000001", 00:12:08.888 "model_number": "SPDK bdev Controller", 00:12:08.888 "max_namespaces": 32, 00:12:08.888 "min_cntlid": 1, 00:12:08.888 "max_cntlid": 65519, 00:12:08.888 "namespaces": [ 00:12:08.888 { 00:12:08.888 "nsid": 1, 00:12:08.888 "bdev_name": "Null1", 00:12:08.888 "name": "Null1", 00:12:08.888 "nguid": "9A72323A4783420EA43333C50347C943", 00:12:08.888 "uuid": "9a72323a-4783-420e-a433-33c50347c943" 00:12:08.888 } 00:12:08.888 ] 00:12:08.888 }, 00:12:08.888 { 00:12:08.888 "nqn": "nqn.2016-06.io.spdk:cnode2", 00:12:08.888 "subtype": "NVMe", 00:12:08.888 "listen_addresses": [ 00:12:08.888 { 00:12:08.888 "trtype": "TCP", 00:12:08.888 "adrfam": "IPv4", 00:12:08.888 "traddr": "10.0.0.2", 00:12:08.888 "trsvcid": "4420" 00:12:08.888 } 00:12:08.888 ], 00:12:08.888 "allow_any_host": true, 00:12:08.888 "hosts": [], 00:12:08.888 "serial_number": "SPDK00000000000002", 00:12:08.888 "model_number": "SPDK bdev Controller", 00:12:08.888 "max_namespaces": 32, 00:12:08.888 "min_cntlid": 1, 00:12:08.888 "max_cntlid": 65519, 00:12:08.888 "namespaces": [ 00:12:08.888 { 00:12:08.888 "nsid": 1, 00:12:08.888 "bdev_name": "Null2", 00:12:08.888 "name": "Null2", 00:12:08.888 "nguid": "FB251E7C012B47B18C319F89230DCE51", 00:12:08.888 "uuid": "fb251e7c-012b-47b1-8c31-9f89230dce51" 00:12:08.888 } 00:12:08.888 ] 00:12:08.888 }, 00:12:08.888 { 00:12:08.888 "nqn": "nqn.2016-06.io.spdk:cnode3", 00:12:08.888 "subtype": "NVMe", 00:12:08.888 "listen_addresses": [ 00:12:08.888 { 00:12:08.888 "trtype": "TCP", 00:12:08.888 "adrfam": "IPv4", 00:12:08.888 "traddr": "10.0.0.2", 00:12:08.888 "trsvcid": "4420" 00:12:08.888 } 00:12:08.888 ], 00:12:08.888 "allow_any_host": true, 00:12:08.888 "hosts": [], 00:12:08.888 "serial_number": "SPDK00000000000003", 00:12:08.888 "model_number": "SPDK bdev Controller", 00:12:08.888 "max_namespaces": 32, 00:12:08.888 "min_cntlid": 1, 00:12:08.888 "max_cntlid": 65519, 00:12:08.888 "namespaces": [ 00:12:08.888 { 00:12:08.888 "nsid": 1, 00:12:08.888 "bdev_name": "Null3", 00:12:08.888 "name": "Null3", 00:12:08.888 "nguid": "8D78809653ED4384950368996B0C46BE", 00:12:08.888 "uuid": "8d788096-53ed-4384-9503-68996b0c46be" 00:12:08.888 } 00:12:08.888 ] 00:12:08.888 }, 00:12:08.888 { 00:12:08.888 "nqn": "nqn.2016-06.io.spdk:cnode4", 00:12:09.149 "subtype": "NVMe", 00:12:09.149 "listen_addresses": [ 00:12:09.149 { 00:12:09.149 "trtype": "TCP", 00:12:09.149 "adrfam": "IPv4", 00:12:09.149 "traddr": "10.0.0.2", 00:12:09.149 "trsvcid": "4420" 00:12:09.149 } 00:12:09.149 ], 00:12:09.149 "allow_any_host": true, 00:12:09.149 "hosts": [], 00:12:09.149 "serial_number": "SPDK00000000000004", 00:12:09.149 "model_number": "SPDK bdev Controller", 00:12:09.149 "max_namespaces": 32, 00:12:09.149 "min_cntlid": 1, 00:12:09.149 "max_cntlid": 65519, 00:12:09.149 "namespaces": [ 00:12:09.149 { 00:12:09.149 "nsid": 1, 00:12:09.149 "bdev_name": "Null4", 00:12:09.149 "name": "Null4", 00:12:09.149 "nguid": "CFC4A5F986344BF78C5856D32A0DB3FC", 00:12:09.149 "uuid": "cfc4a5f9-8634-4bf7-8c58-56d32a0db3fc" 00:12:09.149 } 00:12:09.149 ] 00:12:09.149 } 00:12:09.149 ] 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # seq 1 4 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- 
target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null1 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:12:09.149 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null2 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode3 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null3 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@42 -- # for i in $(seq 1 4) 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@43 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode4 00:12:09.150 11:57:56 
nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@44 -- # rpc_cmd bdev_null_delete Null4 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@47 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # rpc_cmd bdev_get_bdevs 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # jq -r '.[].name' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@49 -- # check_bdevs= 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@50 -- # '[' -n '' ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@55 -- # trap - SIGINT SIGTERM EXIT 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- target/discovery.sh@57 -- # nvmftestfini 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@117 -- # sync 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@120 -- # set +e 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:09.150 rmmod nvme_tcp 00:12:09.150 rmmod nvme_fabrics 00:12:09.150 rmmod nvme_keyring 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@124 -- # set -e 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@125 -- # return 0 
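
The teardown traced above (target/discovery.sh@42-49) is the mirror image of the setup; roughly, under the same rpc.py assumption:

  for i in $(seq 1 4); do
      "$rpc" nvmf_delete_subsystem "nqn.2016-06.io.spdk:cnode$i"
      "$rpc" bdev_null_delete "Null$i"
  done
  "$rpc" nvmf_discovery_remove_referral -t tcp -a 10.0.0.2 -s 4430
  # an empty name list confirms every null bdev is gone before nvmftestfini runs
  "$rpc" bdev_get_bdevs | jq -r '.[].name'
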
00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@489 -- # '[' -n 257623 ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@490 -- # killprocess 257623 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@948 -- # '[' -z 257623 ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@952 -- # kill -0 257623 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # uname 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 257623 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 257623' 00:12:09.150 killing process with pid 257623 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@967 -- # kill 257623 00:12:09.150 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@972 -- # wait 257623 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:09.411 11:57:56 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:12.026 00:12:12.026 real 0m9.106s 00:12:12.026 user 0m7.244s 00:12:12.026 sys 0m4.326s 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_target_discovery -- common/autotest_common.sh@10 -- # set +x 00:12:12.026 ************************************ 00:12:12.026 END TEST nvmf_target_discovery 00:12:12.026 ************************************ 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@19 -- # run_test nvmf_referrals /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra -- 
common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:12.026 ************************************ 00:12:12.026 START TEST nvmf_referrals 00:12:12.026 ************************************ 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/referrals.sh --transport=tcp 00:12:12.026 * Looking for test storage... 00:12:12.026 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # uname -s 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:12.026 11:57:58 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@5 -- # export PATH 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@47 -- # : 0 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@11 -- # NVMF_REFERRAL_IP_1=127.0.0.2 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@12 -- # NVMF_REFERRAL_IP_2=127.0.0.3 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@13 -- # NVMF_REFERRAL_IP_3=127.0.0.4 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@14 -- # NVMF_PORT_REFERRAL=4430 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@15 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@16 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@37 -- # nvmftestinit 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@285 -- # xtrace_disable 00:12:12.026 11:57:58 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # pci_devs=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # net_devs=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:17.306 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # e810=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@296 -- # local -ga e810 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # x722=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@297 -- # local -ga x722 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # mlx=() 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@298 -- # local -ga mlx 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:17.306 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.306 11:58:03 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:17.306 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:17.306 Found net devices under 0000:86:00.0: cvl_0_0 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:17.306 Found net devices under 0000:86:00.1: cvl_0_1 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- 
nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@414 -- # is_hw=yes 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:17.306 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:17.306 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:12:17.306 00:12:17.306 --- 10.0.0.2 ping statistics --- 00:12:17.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.306 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:17.306 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:17.306 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:12:17.306 00:12:17.306 --- 10.0.0.1 ping statistics --- 00:12:17.306 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:17.306 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@422 -- # return 0 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@38 -- # nvmfappstart -m 0xF 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@481 -- # nvmfpid=261382 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@482 -- # waitforlisten 261382 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@829 -- # '[' -z 261382 ']' 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:17.306 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
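
nvmf_tcp_init, traced above, splits the two e810 ports of this host into an initiator side and a target side by moving one of them into a network namespace, then verifies connectivity in both directions before the target is launched. Reduced to the bare commands the log shows (cvl_0_0 and cvl_0_1 are the net devices found under 0000:86:00.0 and 0000:86:00.1 on this particular host):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives inside the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # connectivity check in both directions
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1
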
00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:17.306 11:58:03 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.306 [2024-07-25 11:58:04.035760] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:12:17.306 [2024-07-25 11:58:04.035806] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:17.306 EAL: No free 2048 kB hugepages reported on node 1 00:12:17.306 [2024-07-25 11:58:04.092809] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:17.306 [2024-07-25 11:58:04.172615] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:17.306 [2024-07-25 11:58:04.172654] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:17.306 [2024-07-25 11:58:04.172661] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:17.306 [2024-07-25 11:58:04.172667] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:17.306 [2024-07-25 11:58:04.172672] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:17.306 [2024-07-25 11:58:04.172713] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:17.306 [2024-07-25 11:58:04.172839] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:17.306 [2024-07-25 11:58:04.172924] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:17.306 [2024-07-25 11:58:04.172925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@862 -- # return 0 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@40 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 [2024-07-25 11:58:04.897519] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 [2024-07-25 11:58:04.910899] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@44 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@45 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.3 -s 4430 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@46 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.4 -s 4430 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # jq length 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@48 -- # (( 3 == 3 )) 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # get_referral_ips rpc 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:17.875 11:58:04 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 
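
For the referrals test the target is started inside that namespace and three referrals are registered on the discovery service listening on port 8009. A sketch of that sequence (paths abbreviated, same rpc.py assumption as before, and the wait for the RPC socket left implicit):

  # start nvmf_tgt in the target namespace (shm id 0, all trace groups, 4 cores), as nvmfappstart does
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
  # once the RPC socket is up: TCP transport, discovery listener on 8009, three referrals on 4430
  "$rpc" nvmf_create_transport -t tcp -o -u 8192
  "$rpc" nvmf_subsystem_add_listener -t tcp -a 10.0.0.2 -s 8009 discovery
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$rpc" nvmf_discovery_add_referral -t tcp -a "$ip" -s 4430
  done
  "$rpc" nvmf_discovery_get_referrals | jq length      # the test expects 3 here
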
127.0.0.3 127.0.0.4 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@49 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # get_referral_ips nvme 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:17.875 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.3 127.0.0.4 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@50 -- # [[ 127.0.0.2 127.0.0.3 127.0.0.4 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\3\ \1\2\7\.\0\.\0\.\4 ]] 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@52 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@53 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.3 -s 4430 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@54 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.4 -s 4430 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # jq length 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.135 11:58:05 
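
The same referral set is then read back through the kernel initiator over the 8009 discovery listener and removed again; a sketch of those two steps as the trace above performs them:

  # traddr of every record that is not the current discovery subsystem itself
  nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json \
    | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort
  # expected: 127.0.0.2, 127.0.0.3 and 127.0.0.4, matching the RPC view
  for ip in 127.0.0.2 127.0.0.3 127.0.0.4; do
      "$rpc" nvmf_discovery_remove_referral -t tcp -a "$ip" -s 4430
  done
  "$rpc" nvmf_discovery_get_referrals | jq length      # back to 0
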
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@56 -- # (( 0 == 0 )) 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # get_referral_ips nvme 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.135 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@57 -- # [[ '' == '' ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@60 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n discovery 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@62 -- # rpc_cmd nvmf_discovery_add_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # get_referral_ips rpc 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 127.0.0.2 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@65 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # get_referral_ips nvme 
00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.136 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 127.0.0.2 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@66 -- # [[ 127.0.0.2 127.0.0.2 == \1\2\7\.\0\.\0\.\2\ \1\2\7\.\0\.\0\.\2 ]] 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # get_discovery_entries 'nvme subsystem' 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # jq -r .subnqn 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@67 -- # [[ nqn.2016-06.io.spdk:cnode1 == \n\q\n\.\2\0\1\6\-\0\6\.\i\o\.\s\p\d\k\:\c\n\o\d\e\1 ]] 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # jq -r .subnqn 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # get_discovery_entries 'discovery subsystem referral' 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.396 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@68 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@71 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.656 11:58:05 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # get_referral_ips rpc 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ rpc == \r\p\c ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # jq -r '.[].address.traddr' 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # sort 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@21 -- # echo 127.0.0.2 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@73 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # get_referral_ips nvme 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 127.0.0.2 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@74 -- # [[ 127.0.0.2 == \1\2\7\.\0\.\0\.\2 ]] 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # get_discovery_entries 'nvme subsystem' 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # jq -r .subnqn 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=nvme subsystem' 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.656 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "nvme subsystem")' 00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@75 -- # [[ '' == '' ]] 00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # get_discovery_entries 'discovery subsystem referral' 00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # jq -r .subnqn 
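The same state is cross-checked from the host side: the discovery log page served on 10.0.0.2:8009 is dumped as JSON and filtered by record subtype, so the referral addresses and the NQNs behind them can be compared with what the RPC reported; the trace of the 'discovery subsystem referral' lookup continues below. A condensed sketch of the jq filters used by get_referral_ips nvme and get_discovery_entries, reusing the hostnqn and hostid from the log:

    discover_json() {
        nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
                      --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 \
                      -t tcp -a 10.0.0.2 -s 8009 -o json
    }

    # referral transport addresses: every record except the current discovery subsystem itself
    discover_json | jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' | sort

    # NQNs behind the referral entries; the log expects cnode1 and the well-known discovery NQN
    discover_json | jq -r '.records[] | select(.subtype == "nvme subsystem").subnqn'
    discover_json | jq -r '.records[] | select(.subtype == "discovery subsystem referral").subnqn'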
00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@31 -- # local 'subtype=discovery subsystem referral' 00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@34 -- # jq '.records[] | select(.subtype == "discovery subsystem referral")' 00:12:18.916 11:58:05 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@33 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@76 -- # [[ nqn.2014-08.org.nvmexpress.discovery == \n\q\n\.\2\0\1\4\-\0\8\.\o\r\g\.\n\v\m\e\x\p\r\e\s\s\.\d\i\s\c\o\v\e\r\y ]] 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@79 -- # rpc_cmd nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # rpc_cmd nvmf_discovery_get_referrals 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # jq length 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@82 -- # (( 0 == 0 )) 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # get_referral_ips nvme 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@19 -- # [[ nvme == \r\p\c ]] 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@22 -- # [[ nvme == \n\v\m\e ]] 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 8009 -o json 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # jq -r '.records[] | select(.subtype != "current discovery subsystem").traddr' 00:12:18.916 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # sort 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@26 -- # echo 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@83 -- # [[ '' == '' ]] 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@85 -- # trap - SIGINT SIGTERM EXIT 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- target/referrals.sh@86 -- # nvmftestfini 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@117 -- # sync 
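Teardown of the referrals mirrors the setup: each entry is deleted with nvmf_discovery_remove_referral, and before nvmftestfini runs the test asserts that the list length is back to zero and that nvme discover no longer reports any referral records. A compact sketch of that final assertion, under the same scripts/rpc.py assumption as above:

    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2016-06.io.spdk:cnode1
    scripts/rpc.py nvmf_discovery_remove_referral -t tcp -a 127.0.0.2 -s 4430 -n nqn.2014-08.org.nvmexpress.discovery

    # the referral list must be empty again on the RPC side
    count=$(scripts/rpc.py nvmf_discovery_get_referrals | jq length)
    (( count == 0 )) || { echo "referrals left behind: $count"; exit 1; }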
00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@120 -- # set +e 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:19.176 rmmod nvme_tcp 00:12:19.176 rmmod nvme_fabrics 00:12:19.176 rmmod nvme_keyring 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@124 -- # set -e 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@125 -- # return 0 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@489 -- # '[' -n 261382 ']' 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@490 -- # killprocess 261382 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@948 -- # '[' -z 261382 ']' 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@952 -- # kill -0 261382 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # uname 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 261382 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@966 -- # echo 'killing process with pid 261382' 00:12:19.176 killing process with pid 261382 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@967 -- # kill 261382 00:12:19.176 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@972 -- # wait 261382 00:12:19.435 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:19.435 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:19.436 11:58:06 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:21.978 00:12:21.978 real 0m9.946s 00:12:21.978 user 0m11.754s 00:12:21.978 sys 0m4.440s 00:12:21.978 11:58:08 
nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_referrals -- common/autotest_common.sh@10 -- # set +x 00:12:21.978 ************************************ 00:12:21.978 END TEST nvmf_referrals 00:12:21.978 ************************************ 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@20 -- # run_test nvmf_connect_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:21.978 ************************************ 00:12:21.978 START TEST nvmf_connect_disconnect 00:12:21.978 ************************************ 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_disconnect.sh --transport=tcp 00:12:21.978 * Looking for test storage... 00:12:21.978 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # uname -s 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@5 -- # export PATH 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@47 -- # : 0 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@11 -- # MALLOC_BDEV_SIZE=64 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@15 -- # nvmftestinit 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:12:21.978 11:58:08 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- 
# set +x 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # e810=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # x722=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:27.297 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:27.297 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.297 11:58:14 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:27.297 Found net devices under 0000:86:00.0: cvl_0_0 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:27.297 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:27.297 Found net devices under 0000:86:00.1: cvl_0_1 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- 
nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:27.298 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:12:27.298 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:12:27.298 00:12:27.298 --- 10.0.0.2 ping statistics --- 00:12:27.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.298 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:27.298 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:12:27.298 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.263 ms 00:12:27.298 00:12:27.298 --- 10.0.0.1 ping statistics --- 00:12:27.298 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:27.298 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@422 -- # return 0 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@16 -- # nvmfappstart -m 0xF 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@481 -- # nvmfpid=265268 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@482 -- # waitforlisten 265268 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@829 -- # '[' -z 265268 ']' 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:27.298 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:27.298 11:58:14 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:27.298 [2024-07-25 11:58:14.478319] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
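Because this is a phy run (NET_TYPE=phy), nvmftestinit builds the TCP test bed from the two ice ports it detected: cvl_0_0 is moved into a private network namespace and addressed as the target at 10.0.0.2, cvl_0_1 stays in the root namespace as the initiator at 10.0.0.1, the two pings prove both directions work, and nvmf_tgt is then launched inside the namespace. A trimmed sketch of the commands shown above, with the repository path shortened to its relative form; waitforlisten is the autotest helper that polls the RPC socket until the app responds:

    NS=cvl_0_0_ns_spdk

    ip netns add "$NS"
    ip link set cvl_0_0 netns "$NS"                      # target-side port lives in the namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator address in the root namespace
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev cvl_0_0
    ip link set cvl_0_1 up
    ip netns exec "$NS" ip link set cvl_0_0 up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                                   # root namespace -> target namespace
    ip netns exec "$NS" ping -c 1 10.0.0.1               # target namespace -> root namespace

    # start the target inside the namespace and remember its pid for the later cleanup
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    nvmfpid=$!
    # waitforlisten "$nvmfpid" then blocks until /var/tmp/spdk.sock answers RPCs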
00:12:27.298 [2024-07-25 11:58:14.478362] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:27.298 EAL: No free 2048 kB hugepages reported on node 1 00:12:27.298 [2024-07-25 11:58:14.537467] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:27.559 [2024-07-25 11:58:14.618593] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:27.559 [2024-07-25 11:58:14.618631] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:27.559 [2024-07-25 11:58:14.618638] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:27.559 [2024-07-25 11:58:14.618644] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:27.559 [2024-07-25 11:58:14.618649] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:27.559 [2024-07-25 11:58:14.618693] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:27.559 [2024-07-25 11:58:14.618787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:27.559 [2024-07-25 11:58:14.618856] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:27.559 [2024-07-25 11:58:14.618857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@862 -- # return 0 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 -c 0 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.130 [2024-07-25 11:58:15.332502] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.130 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # rpc_cmd bdev_malloc_create 64 512 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.131 11:58:15 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@20 -- # bdev=Malloc0 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.131 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@559 -- # xtrace_disable 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:28.390 [2024-07-25 11:58:15.384401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@26 -- # '[' 0 -eq 1 ']' 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@31 -- # num_iterations=5 00:12:28.390 11:58:15 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@34 -- # set +x 00:12:31.685 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:34.978 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:38.284 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:41.605 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.902 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@43 -- # trap - SIGINT SIGTERM EXIT 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- target/connect_disconnect.sh@45 -- # nvmftestfini 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@117 -- # sync 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@120 -- # set +e 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:44.902 11:58:31 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:44.902 rmmod nvme_tcp 00:12:44.902 rmmod nvme_fabrics 00:12:44.902 rmmod nvme_keyring 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@124 -- # set -e 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@125 -- # return 0 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@489 -- # '[' -n 265268 ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@490 -- # killprocess 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@948 -- # '[' -z 265268 ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@952 -- # kill -0 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # uname 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 265268' 00:12:44.902 killing process with pid 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@967 -- # kill 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@972 -- # wait 265268 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:44.902 11:58:31 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:46.820 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:46.820 00:12:46.820 real 0m25.358s 00:12:46.820 user 1m10.617s 00:12:46.820 sys 0m5.254s 00:12:46.820 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:46.820 11:58:34 
nvmf_tcp.nvmf_target_extra.nvmf_connect_disconnect -- common/autotest_common.sh@10 -- # set +x 00:12:46.820 ************************************ 00:12:46.820 END TEST nvmf_connect_disconnect 00:12:46.820 ************************************ 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@21 -- # run_test nvmf_multitarget /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:47.080 ************************************ 00:12:47.080 START TEST nvmf_multitarget 00:12:47.080 ************************************ 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget.sh --transport=tcp 00:12:47.080 * Looking for test storage... 00:12:47.080 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # uname -s 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@22 -- # 
NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@5 -- # export PATH 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@47 -- # : 0 
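Stepping back to the connect_disconnect run that finished above: its body was short, creating the TCP transport, backing subsystem nqn.2016-06.io.spdk:cnode1 with a Malloc bdev (MALLOC_BDEV_SIZE=64, MALLOC_BLOCK_SIZE=512), exposing it on 10.0.0.2:4420, and then cycling the host connection num_iterations=5 times, which is what produced the repeated "disconnected 1 controller(s)" lines. A hedged sketch of that flow; the RPC calls are the ones traced in the log, while the loop body is an approximation of connect_disconnect.sh rather than a verbatim copy:

    NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
    NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562

    # target-side plumbing over the RPC socket
    scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 -c 0
    scripts/rpc.py bdev_malloc_create 64 512              # 64 MB bdev with 512-byte blocks -> Malloc0
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

    # initiator-side connect/disconnect cycles (approximate loop body)
    for i in 1 2 3 4 5; do
        nvme connect --hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID" \
             -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420
        nvme disconnect -n nqn.2016-06.io.spdk:cnode1     # prints "... disconnected 1 controller(s)"
    done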
00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:47.080 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@13 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@15 -- # nvmftestinit 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@285 -- # xtrace_disable 00:12:47.081 11:58:34 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # pci_devs=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@291 -- # local -a pci_devs 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # pci_net_devs=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # pci_drivers=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@293 -- # local -A pci_drivers 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # net_devs=() 00:12:52.356 
11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@295 -- # local -ga net_devs 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # e810=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@296 -- # local -ga e810 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # x722=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@297 -- # local -ga x722 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # mlx=() 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@298 -- # local -ga mlx 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:12:52.356 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:12:52.357 Found 0000:86:00.0 (0x8086 - 0x159b) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:12:52.357 Found 0000:86:00.1 (0x8086 - 0x159b) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:12:52.357 Found net devices under 0000:86:00.0: cvl_0_0 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@390 -- # [[ up == up ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:12:52.357 11:58:38 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:12:52.357 Found net devices under 0000:86:00.1: cvl_0_1 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@414 -- # is_hw=yes 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:12:52.357 11:58:38 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:12:52.357 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:12:52.357 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.192 ms 00:12:52.357 00:12:52.357 --- 10.0.0.2 ping statistics --- 00:12:52.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.357 rtt min/avg/max/mdev = 0.192/0.192/0.192/0.000 ms 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:12:52.357 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:12:52.357 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.417 ms 00:12:52.357 00:12:52.357 --- 10.0.0.1 ping statistics --- 00:12:52.357 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:12:52.357 rtt min/avg/max/mdev = 0.417/0.417/0.417/0.000 ms 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@422 -- # return 0 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@16 -- # nvmfappstart -m 0xF 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@722 -- # xtrace_disable 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@481 -- # nvmfpid=271605 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@482 -- # waitforlisten 271605 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@829 -- # '[' -z 271605 ']' 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@834 -- # local max_retries=100 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:52.357 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
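The nvmf_tcp_init step traced above is what pins the test topology for the rest of the run: the target-side port (cvl_0_0) is moved into a private network namespace and addressed as 10.0.0.2, the initiator-side port (cvl_0_1) stays in the root namespace as 10.0.0.1, TCP/4420 is opened with iptables, and both directions are ping-checked. A minimal standalone sketch of that bring-up, reusing the interface and namespace names from this log (they will differ on other hosts):

# clear any stale addressing on the two back-to-back ports
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1

# isolate the target-facing port in its own namespace
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk

# address the initiator (root namespace) and target (inside the namespace) ends
ip addr add 10.0.0.1/24 dev cvl_0_1
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0

# bring the links up and allow NVMe/TCP traffic to the default port 4420
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT

# verify reachability in both directions before starting the target
ping -c 1 10.0.0.2
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1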
00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@838 -- # xtrace_disable 00:12:52.357 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.357 [2024-07-25 11:58:39.148397] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:12:52.357 [2024-07-25 11:58:39.148443] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:12:52.357 EAL: No free 2048 kB hugepages reported on node 1 00:12:52.357 [2024-07-25 11:58:39.208681] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:12:52.357 [2024-07-25 11:58:39.289733] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:12:52.357 [2024-07-25 11:58:39.289769] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:12:52.357 [2024-07-25 11:58:39.289776] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:12:52.357 [2024-07-25 11:58:39.289782] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:12:52.357 [2024-07-25 11:58:39.289787] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:12:52.357 [2024-07-25 11:58:39.289820] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:12:52.357 [2024-07-25 11:58:39.289925] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:12:52.357 [2024-07-25 11:58:39.289953] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:12:52.357 [2024-07-25 11:58:39.289955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:12:52.926 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:12:52.926 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@862 -- # return 0 00:12:52.926 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:12:52.926 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@728 -- # xtrace_disable 00:12:52.926 11:58:39 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:52.926 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:12:52.926 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@18 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:12:52.927 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:52.927 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # jq length 00:12:52.927 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@21 -- # '[' 1 '!=' 1 ']' 00:12:52.927 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_1 -s 32 00:12:53.186 "nvmf_tgt_1" 00:12:53.186 11:58:40 
nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_create_target -n nvmf_tgt_2 -s 32 00:12:53.186 "nvmf_tgt_2" 00:12:53.186 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # jq length 00:12:53.186 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.186 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@28 -- # '[' 3 '!=' 3 ']' 00:12:53.186 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_1 00:12:53.445 true 00:12:53.445 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_delete_target -n nvmf_tgt_2 00:12:53.445 true 00:12:53.445 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py nvmf_get_targets 00:12:53.445 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # jq length 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@35 -- # '[' 1 '!=' 1 ']' 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- target/multitarget.sh@41 -- # nvmftestfini 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@488 -- # nvmfcleanup 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@117 -- # sync 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@120 -- # set +e 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@121 -- # for i in {1..20} 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:12:53.705 rmmod nvme_tcp 00:12:53.705 rmmod nvme_fabrics 00:12:53.705 rmmod nvme_keyring 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@124 -- # set -e 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@125 -- # return 0 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@489 -- # '[' -n 271605 ']' 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@490 -- # killprocess 271605 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@948 -- # '[' -z 271605 ']' 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@952 -- # kill -0 271605 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # uname 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 
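Stripped of the xtrace noise, the multitarget case that just ran reduces to a short sequence against the target's RPC socket via multitarget_rpc.py: count the default target, add two named targets with the same -s 32 argument seen in the trace, confirm the count, delete them, and confirm the count drops back. A rough equivalent, assuming an nvmf_tgt already listening on the default /var/tmp/spdk.sock:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py

$rpc nvmf_get_targets | jq length              # 1: only the default target exists
$rpc nvmf_create_target -n nvmf_tgt_1 -s 32
$rpc nvmf_create_target -n nvmf_tgt_2 -s 32
$rpc nvmf_get_targets | jq length              # 3: default plus the two new targets
$rpc nvmf_delete_target -n nvmf_tgt_1
$rpc nvmf_delete_target -n nvmf_tgt_2
$rpc nvmf_get_targets | jq length              # back to 1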
00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 271605 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@966 -- # echo 'killing process with pid 271605' 00:12:53.705 killing process with pid 271605 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@967 -- # kill 271605 00:12:53.705 11:58:40 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@972 -- # wait 271605 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@278 -- # remove_spdk_ns 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:53.964 11:58:41 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:55.873 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:12:55.873 00:12:55.873 real 0m8.979s 00:12:55.873 user 0m8.837s 00:12:55.873 sys 0m4.183s 00:12:55.873 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@1124 -- # xtrace_disable 00:12:55.873 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_multitarget -- common/autotest_common.sh@10 -- # set +x 00:12:55.873 ************************************ 00:12:55.873 END TEST nvmf_multitarget 00:12:55.873 ************************************ 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@22 -- # run_test nvmf_rpc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:12:56.134 ************************************ 00:12:56.134 START TEST nvmf_rpc 00:12:56.134 ************************************ 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.sh --transport=tcp 00:12:56.134 * Looking for test storage... 
00:12:56.134 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # uname -s 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@5 -- # export PATH 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@47 -- # : 0 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@11 -- # loops=5 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@23 -- # nvmftestinit 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@448 -- # prepare_net_devs 00:12:56.134 11:58:43 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@285 -- # xtrace_disable 00:12:56.134 11:58:43 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # pci_devs=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # net_devs=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # e810=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@296 -- # local -ga e810 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # x722=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@297 -- # local -ga x722 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # mlx=() 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@298 -- # local -ga mlx 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:01.412 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:01.413 11:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:01.413 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:01.413 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.413 
11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:01.413 Found net devices under 0000:86:00.0: cvl_0_0 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:01.413 Found net devices under 0000:86:00.1: cvl_0_1 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@414 -- # is_hw=yes 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:01.413 11:58:48 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:01.413 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:01.413 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.261 ms 00:13:01.413 00:13:01.413 --- 10.0.0.2 ping statistics --- 00:13:01.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.413 rtt min/avg/max/mdev = 0.261/0.261/0.261/0.000 ms 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:01.413 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:01.413 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:13:01.413 00:13:01.413 --- 10.0.0.1 ping statistics --- 00:13:01.413 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:01.413 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@422 -- # return 0 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:01.413 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:01.414 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:01.414 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@24 -- # nvmfappstart -m 0xF 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@481 -- # nvmfpid=275378 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@482 -- # waitforlisten 275378 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@829 -- # '[' -z 275378 ']' 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:01.674 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:01.674 11:58:48 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.674 [2024-07-25 11:58:48.725912] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:13:01.674 [2024-07-25 11:58:48.725956] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:01.674 EAL: No free 2048 kB hugepages reported on node 1 00:13:01.674 [2024-07-25 11:58:48.783967] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:01.674 [2024-07-25 11:58:48.865418] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:01.674 [2024-07-25 11:58:48.865454] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:01.674 [2024-07-25 11:58:48.865460] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:01.674 [2024-07-25 11:58:48.865467] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:01.674 [2024-07-25 11:58:48.865472] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
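nvmfappstart, whose output appears here, launches the target binary inside the namespace with an explicit shared-memory id, trace mask and core mask, records the pid, and waits for the RPC socket via the autotest waitforlisten helper. A condensed sketch of that step; the polling loop below is only an illustrative stand-in for waitforlisten, using SPDK's generic rpc.py client:

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

ip netns exec cvl_0_0_ns_spdk $spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
nvmfpid=$!

# stand-in for waitforlisten: poll until the app answers on its RPC socket
until $spdk/scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1; do
    sleep 0.5
done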
00:13:01.674 [2024-07-25 11:58:48.865512] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:01.674 [2024-07-25 11:58:48.865604] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:01.674 [2024-07-25 11:58:48.865690] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:01.674 [2024-07-25 11:58:48.865691] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@862 -- # return 0 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # rpc_cmd nvmf_get_stats 00:13:02.614 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@26 -- # stats='{ 00:13:02.615 "tick_rate": 2300000000, 00:13:02.615 "poll_groups": [ 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_000", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_001", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_002", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_003", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [] 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # jcount '.poll_groups[].name' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@14 -- # local 'filter=.poll_groups[].name' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # jq '.poll_groups[].name' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@15 -- # wc -l 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@28 -- # (( 4 == 4 )) 
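The first assertion rpc.sh makes, visible in the JSON dumped above, is that a freshly started target exposes one poll group per reactor core (-m 0xF gives four) and no transports until one is created; rpc_cmd in the trace is effectively autotest shorthand for calling SPDK's scripts/rpc.py. A minimal version of the same check, with the wrapper spelled out:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

stats=$($rpc nvmf_get_stats)
echo "$stats" | jq '.poll_groups[].name' | wc -l    # expect 4 with a 0xF core mask
echo "$stats" | jq '.poll_groups[0].transports[0]'  # null: no transport created yet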
00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # jq '.poll_groups[0].transports[0]' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@29 -- # [[ null == null ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@31 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 [2024-07-25 11:58:49.687872] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # rpc_cmd nvmf_get_stats 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@33 -- # stats='{ 00:13:02.615 "tick_rate": 2300000000, 00:13:02.615 "poll_groups": [ 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_000", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [ 00:13:02.615 { 00:13:02.615 "trtype": "TCP" 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_001", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [ 00:13:02.615 { 00:13:02.615 "trtype": "TCP" 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_002", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [ 00:13:02.615 { 00:13:02.615 "trtype": "TCP" 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }, 00:13:02.615 { 00:13:02.615 "name": "nvmf_tgt_poll_group_003", 00:13:02.615 "admin_qpairs": 0, 00:13:02.615 "io_qpairs": 0, 00:13:02.615 "current_admin_qpairs": 0, 00:13:02.615 "current_io_qpairs": 0, 00:13:02.615 "pending_bdev_io": 0, 00:13:02.615 "completed_nvme_io": 0, 00:13:02.615 "transports": [ 00:13:02.615 { 00:13:02.615 "trtype": "TCP" 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 } 00:13:02.615 ] 00:13:02.615 }' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # jsum '.poll_groups[].admin_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@35 -- # (( 0 == 0 )) 00:13:02.615 11:58:49 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # jsum '.poll_groups[].io_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].io_qpairs' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@36 -- # (( 0 == 0 )) 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@38 -- # '[' rdma == tcp ']' 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@46 -- # MALLOC_BDEV_SIZE=64 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@47 -- # MALLOC_BLOCK_SIZE=512 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@49 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 Malloc1 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@52 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@53 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@54 -- # rpc_cmd nvmf_subsystem_allow_any_host -d nqn.2016-06.io.spdk:cnode1 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@55 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.615 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.615 [2024-07-25 11:58:49.859823] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@58 -- # NOT nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.2 -s 4420 00:13:02.875 [2024-07-25 11:58:49.884629] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:02.875 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:02.875 could not add new controller: failed to write to nvme-fabrics device 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@61 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:02.875 11:58:49 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@62 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:03.814 11:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@63 -- # waitforserial SPDKISFASTANDAWESOME 00:13:03.814 11:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:03.814 11:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:03.814 11:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:03.814 11:58:50 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:06.354 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:06.354 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:06.354 11:58:52 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@64 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:06.354 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@65 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@68 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2016-06.io.spdk:cnode1 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@69 -- # NOT nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@648 -- # local es=0 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@650 -- # valid_exec_arg nvme connect 
--hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@636 -- # local arg=nvme 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # type -t nvme 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # type -P nvme 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # arg=/usr/sbin/nvme 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@642 -- # [[ -x /usr/sbin/nvme ]] 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:06.354 [2024-07-25 11:58:53.125977] ctrlr.c: 822:nvmf_qpair_access_allowed: *ERROR*: Subsystem 'nqn.2016-06.io.spdk:cnode1' does not allow host 'nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562' 00:13:06.354 Failed to write to /dev/nvme-fabrics: Input/output error 00:13:06.354 could not add new controller: failed to write to nvme-fabrics device 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@651 -- # es=1 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@72 -- # rpc_cmd nvmf_subsystem_allow_any_host -e nqn.2016-06.io.spdk:cnode1 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:06.354 11:58:53 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@73 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:07.294 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@74 -- # waitforserial SPDKISFASTANDAWESOME 00:13:07.294 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:07.294 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:07.294 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:07.294 11:58:54 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 
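The rejected connects and the successful ones above exercise per-subsystem host access control: with allow_any_host disabled, the target refuses a connect until the host NQN is whitelisted or any-host access is re-enabled. A condensed sketch of that flow under the same assumptions as before, with HOSTNQN standing in for the initiator NQN used in this run:

# Host-ACL flow exercised by target/rpc.sh@54-@74 above.
SUBNQN=nqn.2016-06.io.spdk:cnode1
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562

./scripts/rpc.py nvmf_subsystem_allow_any_host -d "$SUBNQN"       # lock the subsystem down
nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420 ||
    echo "rejected: ctrlr.c logs 'does not allow host'"           # expected to fail here

./scripts/rpc.py nvmf_subsystem_add_host "$SUBNQN" "$HOSTNQN"     # whitelist this one host
nvme connect -t tcp -n "$SUBNQN" -q "$HOSTNQN" -a 10.0.0.2 -s 4420   # now succeeds
nvme disconnect -n "$SUBNQN"

./scripts/rpc.py nvmf_subsystem_remove_host "$SUBNQN" "$HOSTNQN"  # back to rejecting...
./scripts/rpc.py nvmf_subsystem_allow_any_host -e "$SUBNQN"       # ...until any host is allowed again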
00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@75 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:09.253 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@76 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@78 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # seq 1 5 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 [2024-07-25 11:58:56.404669] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.253 
11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:09.253 11:58:56 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:10.633 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:10.633 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:10.633 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:10.633 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:10.633 11:58:57 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:12.542 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 
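Each of the five loop iterations that follow repeats the same target-side setup and teardown around an initiator connect/disconnect. A minimal sketch of one iteration, assuming the Malloc1 bdev created earlier is still present:

# One pass of the 'for i in $(seq 1 $loops)' cycle above (target side only).
SUBNQN=nqn.2016-06.io.spdk:cnode1

./scripts/rpc.py nvmf_create_subsystem "$SUBNQN" -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_listener "$SUBNQN" -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_ns "$SUBNQN" Malloc1 -n 5   # expose Malloc1 as namespace ID 5
./scripts/rpc.py nvmf_subsystem_allow_any_host "$SUBNQN"        # as in rpc.sh@85, so the connect below is accepted

# ... initiator side: nvme connect, wait for the SPDKISFASTANDAWESOME serial, nvme disconnect ...

./scripts/rpc.py nvmf_subsystem_remove_ns "$SUBNQN" 5
./scripts/rpc.py nvmf_delete_subsystem "$SUBNQN"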
00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 [2024-07-25 11:58:59.651161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:12.542 11:58:59 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:13.921 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:13.921 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc 
-- common/autotest_common.sh@1198 -- # local i=0 00:13:13.921 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:13.921 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:13.921 11:59:00 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:15.828 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.828 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
common/autotest_common.sh@10 -- # set +x 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.829 [2024-07-25 11:59:02.950285] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:15.829 11:59:02 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:17.208 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:17.208 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:17.208 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:17.208 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:17.208 11:59:04 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:19.119 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:19.119 11:59:06 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 [2024-07-25 11:59:06.276440] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:19.119 11:59:06 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:20.501 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:20.501 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:20.501 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:20.501 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:20.501 11:59:07 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:22.412 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd 
nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@81 -- # for i in $(seq 1 $loops) 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@82 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@83 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 [2024-07-25 11:59:09.567422] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@84 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 5 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@85 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:22.412 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:22.413 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:22.413 11:59:09 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@86 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:13:23.793 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@88 -- # waitforserial SPDKISFASTANDAWESOME 00:13:23.793 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1198 -- # local i=0 00:13:23.793 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:13:23.793 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:13:23.793 11:59:10 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1205 -- # sleep 2 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:13:25.703 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1208 -- # return 0 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@90 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:13:25.703 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@91 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1219 -- # local i=0 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1231 -- # return 0 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@93 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 5 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@94 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # seq 1 5 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 [2024-07-25 11:59:12.876749] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.703 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 [2024-07-25 11:59:12.924881] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns 
nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.704 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 [2024-07-25 11:59:12.977036] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd 
nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:12 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 [2024-07-25 11:59:13.025186] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@99 -- # for i in $(seq 1 $loops) 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@100 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDKISFASTANDAWESOME 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@101 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 [2024-07-25 11:59:13.073353] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@102 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@103 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@105 -- # rpc_cmd nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 
nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@107 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # rpc_cmd nvmf_get_stats 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:25.965 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@110 -- # stats='{ 00:13:25.965 "tick_rate": 2300000000, 00:13:25.965 "poll_groups": [ 00:13:25.965 { 00:13:25.965 "name": "nvmf_tgt_poll_group_000", 00:13:25.965 "admin_qpairs": 2, 00:13:25.965 "io_qpairs": 168, 00:13:25.965 "current_admin_qpairs": 0, 00:13:25.965 "current_io_qpairs": 0, 00:13:25.965 "pending_bdev_io": 0, 00:13:25.965 "completed_nvme_io": 268, 00:13:25.965 "transports": [ 00:13:25.965 { 00:13:25.965 "trtype": "TCP" 00:13:25.965 } 00:13:25.965 ] 00:13:25.965 }, 00:13:25.965 { 00:13:25.965 "name": "nvmf_tgt_poll_group_001", 00:13:25.965 "admin_qpairs": 2, 00:13:25.965 "io_qpairs": 168, 00:13:25.965 "current_admin_qpairs": 0, 00:13:25.965 "current_io_qpairs": 0, 00:13:25.965 "pending_bdev_io": 0, 00:13:25.965 "completed_nvme_io": 268, 00:13:25.965 "transports": [ 00:13:25.965 { 00:13:25.965 "trtype": "TCP" 00:13:25.965 } 00:13:25.965 ] 00:13:25.965 }, 00:13:25.965 { 00:13:25.965 "name": "nvmf_tgt_poll_group_002", 00:13:25.965 "admin_qpairs": 1, 00:13:25.965 "io_qpairs": 168, 00:13:25.965 "current_admin_qpairs": 0, 00:13:25.965 "current_io_qpairs": 0, 00:13:25.965 "pending_bdev_io": 0, 00:13:25.965 "completed_nvme_io": 267, 00:13:25.965 "transports": [ 00:13:25.965 { 00:13:25.965 "trtype": "TCP" 00:13:25.965 } 00:13:25.965 ] 00:13:25.965 }, 00:13:25.965 { 00:13:25.965 "name": "nvmf_tgt_poll_group_003", 00:13:25.965 "admin_qpairs": 2, 00:13:25.966 "io_qpairs": 168, 00:13:25.966 "current_admin_qpairs": 0, 00:13:25.966 "current_io_qpairs": 0, 00:13:25.966 "pending_bdev_io": 0, 00:13:25.966 "completed_nvme_io": 219, 00:13:25.966 "transports": [ 00:13:25.966 { 00:13:25.966 "trtype": "TCP" 00:13:25.966 } 00:13:25.966 ] 00:13:25.966 } 00:13:25.966 ] 00:13:25.966 }' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # jsum '.poll_groups[].admin_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].admin_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq '.poll_groups[].admin_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@112 -- # (( 7 > 0 )) 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # jsum '.poll_groups[].io_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@19 -- # local 'filter=.poll_groups[].io_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # jq 
'.poll_groups[].io_qpairs' 00:13:25.966 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@20 -- # awk '{s+=$1}END{print s}' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@113 -- # (( 672 > 0 )) 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@115 -- # '[' rdma == tcp ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@121 -- # trap - SIGINT SIGTERM EXIT 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- target/rpc.sh@123 -- # nvmftestfini 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@117 -- # sync 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@120 -- # set +e 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:26.227 rmmod nvme_tcp 00:13:26.227 rmmod nvme_fabrics 00:13:26.227 rmmod nvme_keyring 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@124 -- # set -e 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@125 -- # return 0 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@489 -- # '[' -n 275378 ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@490 -- # killprocess 275378 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@948 -- # '[' -z 275378 ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@952 -- # kill -0 275378 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # uname 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 275378 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 275378' 00:13:26.227 killing process with pid 275378 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@967 -- # kill 275378 00:13:26.227 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@972 -- # wait 275378 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- 
nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:26.487 11:59:13 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:28.392 00:13:28.392 real 0m32.439s 00:13:28.392 user 1m39.956s 00:13:28.392 sys 0m5.593s 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.392 ************************************ 00:13:28.392 END TEST nvmf_rpc 00:13:28.392 ************************************ 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@23 -- # run_test nvmf_invalid /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:28.392 11:59:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:28.651 ************************************ 00:13:28.651 START TEST nvmf_invalid 00:13:28.651 ************************************ 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/invalid.sh --transport=tcp 00:13:28.651 * Looking for test storage... 
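The nvmf_rpc run that just finished above ends by pulling nvmf_get_stats and summing the per-poll-group counters with the jsum helper (jq piped into awk). A rough standalone equivalent of that aggregation is sketched here; it queries the target directly instead of reusing the captured $stats variable, so treat it as an illustration of the pattern rather than the exact rpc.sh code.

    # Sketch only: assumes a running nvmf target reachable through this rpc.py.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    jsum() {
        local filter=$1
        # Sum one numeric field across every poll group reported by nvmf_get_stats.
        "$RPC" nvmf_get_stats | jq "$filter" | awk '{s+=$1} END {print s}'
    }
    jsum '.poll_groups[].admin_qpairs'   # 7 in the run above
    jsum '.poll_groups[].io_qpairs'      # 672 in the run above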
00:13:28.651 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # uname -s 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@5 -- # export PATH 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@47 -- # : 0 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@11 -- # multi_target_rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/multitarget_rpc.py 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@12 -- # rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@13 -- # nqn=nqn.2016-06.io.spdk:cnode 
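invalid.sh drives its negative checks through the rpc.py path and NQN prefix defined here: each check sends a deliberately malformed request and matches the JSON-RPC error text. A rough standalone version of the first one (the nonexistent target "foobar", exercised in full further below in this log) could look like the sketch that follows; the 2>&1 capture and the assertion wording are assumptions, and the real logic lives in target/invalid.sh.

    # Sketch only: reproduces the "unknown target" negative check seen later in this log.
    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    NQN=nqn.2016-06.io.spdk:cnode
    # Creating a subsystem on a target that does not exist should fail with code -32603.
    out=$("$RPC" nvmf_create_subsystem -t foobar "${NQN}23643" 2>&1) || true
    [[ $out == *"Unable to find target foobar"* ]] && echo "got expected JSON-RPC error"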
00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@14 -- # target=foobar 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@16 -- # RANDOM=0 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@34 -- # nvmftestinit 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@285 -- # xtrace_disable 00:13:28.651 11:59:15 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # pci_devs=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # net_devs=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # e810=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@296 -- # local -ga e810 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # x722=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@297 -- # local -ga x722 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # mlx=() 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@298 -- # local -ga mlx 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:33.956 11:59:20 
nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:33.956 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:33.956 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
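Both ports of the E810 NIC (0000:86:00.0 and 0000:86:00.1, vendor 0x8086 device 0x159b) have now been matched against the known device-ID tables; the block that follows maps each matched PCI address to its kernel net device through sysfs. A minimal standalone sketch of that lookup is below (the operstate check is an assumption about how "up" is verified, and the interface names cvl_0_0/cvl_0_1 are specific to this host).

    # Sketch only: resolve the net devices behind the matched PCI addresses via sysfs.
    for pci in 0000:86:00.0 0000:86:00.1; do
        for path in /sys/bus/pci/devices/"$pci"/net/*; do
            [[ -e $path ]] || continue            # no net device bound to this function
            dev=${path##*/}
            state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
            echo "Found net device under $pci: $dev (operstate: $state)"
        done
    done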
00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:33.956 Found net devices under 0000:86:00.0: cvl_0_0 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:33.956 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:33.957 Found net devices under 0000:86:00.1: cvl_0_1 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@414 -- # is_hw=yes 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:33.957 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:13:33.957 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.160 ms 00:13:33.957 00:13:33.957 --- 10.0.0.2 ping statistics --- 00:13:33.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.957 rtt min/avg/max/mdev = 0.160/0.160/0.160/0.000 ms 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:33.957 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:13:33.957 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.282 ms 00:13:33.957 00:13:33.957 --- 10.0.0.1 ping statistics --- 00:13:33.957 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:33.957 rtt min/avg/max/mdev = 0.282/0.282/0.282/0.000 ms 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@422 -- # return 0 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@35 -- # nvmfappstart -m 0xF 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@481 -- # nvmfpid=282966 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@482 -- # waitforlisten 282966 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@829 -- # '[' -z 282966 ']' 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:33.957 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:33.957 11:59:20 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:33.957 [2024-07-25 11:59:21.044839] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:13:33.957 [2024-07-25 11:59:21.044880] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:33.957 EAL: No free 2048 kB hugepages reported on node 1 00:13:33.957 [2024-07-25 11:59:21.101230] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:13:33.957 [2024-07-25 11:59:21.179697] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:33.957 [2024-07-25 11:59:21.179734] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:33.957 [2024-07-25 11:59:21.179744] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:33.957 [2024-07-25 11:59:21.179749] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:33.957 [2024-07-25 11:59:21.179754] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:33.957 [2024-07-25 11:59:21.179796] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:33.957 [2024-07-25 11:59:21.179892] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:33.957 [2024-07-25 11:59:21.179958] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:33.957 [2024-07-25 11:59:21.179960] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@862 -- # return 0 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@37 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini $1; exit 1' SIGINT SIGTERM EXIT 00:13:34.894 11:59:21 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -t foobar nqn.2016-06.io.spdk:cnode23643 00:13:34.894 [2024-07-25 11:59:22.039691] nvmf_rpc.c: 396:rpc_nvmf_create_subsystem: *ERROR*: Unable to find target foobar 00:13:34.894 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@40 -- # out='request: 00:13:34.894 { 00:13:34.894 "nqn": "nqn.2016-06.io.spdk:cnode23643", 00:13:34.894 "tgt_name": "foobar", 00:13:34.894 "method": "nvmf_create_subsystem", 00:13:34.894 "req_id": 1 00:13:34.894 } 00:13:34.894 Got JSON-RPC error response 00:13:34.894 response: 00:13:34.894 { 00:13:34.894 "code": -32603, 00:13:34.894 "message": "Unable to find target foobar" 00:13:34.894 }' 00:13:34.894 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@41 -- # [[ request: 00:13:34.894 { 00:13:34.894 "nqn": "nqn.2016-06.io.spdk:cnode23643", 00:13:34.894 "tgt_name": "foobar", 00:13:34.894 "method": "nvmf_create_subsystem", 00:13:34.894 "req_id": 1 
00:13:34.894 } 00:13:34.894 Got JSON-RPC error response 00:13:34.894 response: 00:13:34.894 { 00:13:34.894 "code": -32603, 00:13:34.895 "message": "Unable to find target foobar" 00:13:34.895 } == *\U\n\a\b\l\e\ \t\o\ \f\i\n\d\ \t\a\r\g\e\t* ]] 00:13:34.895 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # echo -e '\x1f' 00:13:34.895 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s $'SPDKISFASTANDAWESOME\037' nqn.2016-06.io.spdk:cnode318 00:13:35.154 [2024-07-25 11:59:22.228376] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode318: invalid serial number 'SPDKISFASTANDAWESOME' 00:13:35.154 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@45 -- # out='request: 00:13:35.154 { 00:13:35.154 "nqn": "nqn.2016-06.io.spdk:cnode318", 00:13:35.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.154 "method": "nvmf_create_subsystem", 00:13:35.154 "req_id": 1 00:13:35.154 } 00:13:35.154 Got JSON-RPC error response 00:13:35.154 response: 00:13:35.154 { 00:13:35.154 "code": -32602, 00:13:35.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.154 }' 00:13:35.154 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@46 -- # [[ request: 00:13:35.154 { 00:13:35.154 "nqn": "nqn.2016-06.io.spdk:cnode318", 00:13:35.154 "serial_number": "SPDKISFASTANDAWESOME\u001f", 00:13:35.154 "method": "nvmf_create_subsystem", 00:13:35.154 "req_id": 1 00:13:35.154 } 00:13:35.154 Got JSON-RPC error response 00:13:35.154 response: 00:13:35.154 { 00:13:35.154 "code": -32602, 00:13:35.154 "message": "Invalid SN SPDKISFASTANDAWESOME\u001f" 00:13:35.154 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.154 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # echo -e '\x1f' 00:13:35.154 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -d $'SPDK_Controller\037' nqn.2016-06.io.spdk:cnode1198 00:13:35.414 [2024-07-25 11:59:22.445075] nvmf_rpc.c: 422:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode1198: invalid model number 'SPDK_Controller' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@50 -- # out='request: 00:13:35.414 { 00:13:35.414 "nqn": "nqn.2016-06.io.spdk:cnode1198", 00:13:35.414 "model_number": "SPDK_Controller\u001f", 00:13:35.414 "method": "nvmf_create_subsystem", 00:13:35.414 "req_id": 1 00:13:35.414 } 00:13:35.414 Got JSON-RPC error response 00:13:35.414 response: 00:13:35.414 { 00:13:35.414 "code": -32602, 00:13:35.414 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.414 }' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@51 -- # [[ request: 00:13:35.414 { 00:13:35.414 "nqn": "nqn.2016-06.io.spdk:cnode1198", 00:13:35.414 "model_number": "SPDK_Controller\u001f", 00:13:35.414 "method": "nvmf_create_subsystem", 00:13:35.414 "req_id": 1 00:13:35.414 } 00:13:35.414 Got JSON-RPC error response 00:13:35.414 response: 00:13:35.414 { 00:13:35.414 "code": -32602, 00:13:35.414 "message": "Invalid MN SPDK_Controller\u001f" 00:13:35.414 } == *\I\n\v\a\l\i\d\ \M\N* ]] 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # gen_random_s 21 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local 
length=21 ll 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 115 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x73' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=s 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 65 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x41' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=A 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 118 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x76' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=v 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 66 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x42' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=B 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 
00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 70 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x46' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=F 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 83 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x53' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=S 
00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 53 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x35' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=5 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 94 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5e' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='^' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 
00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=u 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.414 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 39 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x27' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=\' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 42 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2a' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='*' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ l == \- ]] 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo 'lsAvB3FdD DS5^l!qu'\''Q*' 00:13:35.415 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem -s 'lsAvB3FdD DS5^l!qu'\''Q*' nqn.2016-06.io.spdk:cnode4203 00:13:35.675 [2024-07-25 11:59:22.762141] nvmf_rpc.c: 413:rpc_nvmf_create_subsystem: *ERROR*: Subsystem nqn.2016-06.io.spdk:cnode4203: invalid serial number 'lsAvB3FdD DS5^l!qu'Q*' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- 
target/invalid.sh@54 -- # out='request: 00:13:35.675 { 00:13:35.675 "nqn": "nqn.2016-06.io.spdk:cnode4203", 00:13:35.675 "serial_number": "lsAvB3FdD DS5^l!qu'\''Q*", 00:13:35.675 "method": "nvmf_create_subsystem", 00:13:35.675 "req_id": 1 00:13:35.675 } 00:13:35.675 Got JSON-RPC error response 00:13:35.675 response: 00:13:35.675 { 00:13:35.675 "code": -32602, 00:13:35.675 "message": "Invalid SN lsAvB3FdD DS5^l!qu'\''Q*" 00:13:35.675 }' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@55 -- # [[ request: 00:13:35.675 { 00:13:35.675 "nqn": "nqn.2016-06.io.spdk:cnode4203", 00:13:35.675 "serial_number": "lsAvB3FdD DS5^l!qu'Q*", 00:13:35.675 "method": "nvmf_create_subsystem", 00:13:35.675 "req_id": 1 00:13:35.675 } 00:13:35.675 Got JSON-RPC error response 00:13:35.675 response: 00:13:35.675 { 00:13:35.675 "code": -32602, 00:13:35.675 "message": "Invalid SN lsAvB3FdD DS5^l!qu'Q*" 00:13:35.675 } == *\I\n\v\a\l\i\d\ \S\N* ]] 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@58 -- # gen_random_s 41 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@19 -- # local length=41 ll 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # chars=('32' '33' '34' '35' '36' '37' '38' '39' '40' '41' '42' '43' '44' '45' '46' '47' '48' '49' '50' '51' '52' '53' '54' '55' '56' '57' '58' '59' '60' '61' '62' '63' '64' '65' '66' '67' '68' '69' '70' '71' '72' '73' '74' '75' '76' '77' '78' '79' '80' '81' '82' '83' '84' '85' '86' '87' '88' '89' '90' '91' '92' '93' '94' '95' '96' '97' '98' '99' '100' '101' '102' '103' '104' '105' '106' '107' '108' '109' '110' '111' '112' '113' '114' '115' '116' '117' '118' '119' '120' '121' '122' '123' '124' '125' '126' '127') 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@21 -- # local chars 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@22 -- # local string 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll = 0 )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 92 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5c' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='\' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 81 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x51' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Q 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# string+== 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 99 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x63' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=c 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 90 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5a' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=Z 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 35 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x23' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='#' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 40 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x28' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='(' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 119 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
echo -e '\x77' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=w 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 64 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x40' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=@ 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 61 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3d' 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+== 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 93 00:13:35.675 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x5d' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=']' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 124 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7c' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='|' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 113 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x71' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=q 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- 
# printf %x 122 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x7a' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=z 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 51 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x33' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=3 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 88 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x58' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=X 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 50 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x32' 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=2 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.676 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 32 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x20' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=' ' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 103 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x67' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=g 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 110 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6e' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=n 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 48 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x30' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=0 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 80 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x50' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=P 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 112 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x70' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=p 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 47 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x2f' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=/ 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 109 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6d' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=m 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( 
ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 33 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x21' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='!' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 68 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x44' 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=D 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 76 00:13:35.936 11:59:22 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x4c' 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=L 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 73 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x49' 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=I 00:13:35.936 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 120 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x78' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=x 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 69 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x45' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=E 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 117 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x75' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # 
string+=u 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 116 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x74' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=t 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 108 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x6c' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=l 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 60 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x3c' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+='<' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # printf %x 100 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # echo -e '\x64' 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@25 -- # string+=d 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll++ )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@24 -- # (( ll < length )) 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@28 -- # [[ \ == \- ]] 00:13:35.937 11:59:23 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- target/invalid.sh@31 -- # echo '\Q=cZ#@X(w@=]3|qz3X2 Pgn0PPp/m!DLIxEutl /dev/null' 00:13:38.012 11:59:25 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:40.552 00:13:40.552 real 0m11.533s 00:13:40.552 user 0m19.509s 00:13:40.552 sys 0m4.845s 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_invalid -- common/autotest_common.sh@10 -- # set +x 00:13:40.552 ************************************ 00:13:40.552 END TEST nvmf_invalid 00:13:40.552 ************************************ 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@24 -- # run_test nvmf_connect_stress /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh 
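The xtrace above is target/invalid.sh assembling a random, deliberately malformed name one character at a time: invalid.sh@25 converts a code point to hex with printf %x, renders the character with echo -e '\xNN', and appends it to $string, while invalid.sh@24 advances the ll counter until it reaches $length; the finished string ('\Q=cZ#@X(w@=]3|qz3X2 Pgn0PPp/m!DLIxEutl...') is echoed at invalid.sh@31 for the invalid-input checks. A minimal, self-contained sketch of the same character-building technique (hypothetical helper name and character range, not the actual invalid.sh source):

    gen_random_string() {
        local length=$1 string='' ll code
        for (( ll = 0; ll < length; ll++ )); do
            code=$(( RANDOM % 94 + 33 ))                    # printable ASCII 33..126
            string+=$(echo -e "\x$(printf %x "$code")")     # e.g. 99 -> \x63 -> 'c'
        done
        echo "$string"
    }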
--transport=tcp 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:40.552 ************************************ 00:13:40.552 START TEST nvmf_connect_stress 00:13:40.552 ************************************ 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh --transport=tcp 00:13:40.552 * Looking for test storage... 00:13:40.552 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # uname -s 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:40.552 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@516 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@5 -- # export PATH 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@47 -- # : 0 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@12 -- # nvmftestinit 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@285 -- # xtrace_disable 00:13:40.553 11:59:27 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # pci_devs=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@291 -- # local -a pci_devs 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # pci_net_devs=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # pci_drivers=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@293 -- # local -A pci_drivers 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # net_devs=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@295 -- # local -ga net_devs 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # e810=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@296 -- # local -ga e810 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # x722=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@297 -- # local 
-ga x722 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # mlx=() 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@298 -- # local -ga mlx 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:13:45.826 Found 0000:86:00.0 (0x8086 - 0x159b) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress 
-- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:13:45.826 Found 0000:86:00.1 (0x8086 - 0x159b) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:13:45.826 Found net devices under 0000:86:00.0: cvl_0_0 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@390 -- # [[ up == up ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:13:45.826 Found net devices under 0000:86:00.1: cvl_0_1 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
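Just above, nvmf/common.sh resolves each detected NIC to its kernel interface: the two ice-driven E810 ports at 0000:86:00.0 and 0000:86:00.1 (vendor 0x8086, device 0x159b) are looked up through sysfs and their interfaces, cvl_0_0 and cvl_0_1, are collected into net_devs. A condensed sketch of that lookup (the real helper at nvmf/common.sh@382-401 also filters on link state, omitted here):

    net_devs=()
    for pci in 0000:86:00.0 0000:86:00.1; do              # PCI addresses taken from the log
        pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)  # interfaces bound to this device
        pci_net_devs=("${pci_net_devs[@]##*/}")           # strip the sysfs path prefix
        echo "Found net devices under $pci: ${pci_net_devs[*]}"
        net_devs+=("${pci_net_devs[@]}")
    done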
nvmf/common.sh@404 -- # (( 2 == 0 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@414 -- # is_hw=yes 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:13:45.826 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:13:45.827 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
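With cvl_0_0 and cvl_0_1 identified, nvmf_tcp_init (traced above) isolates the target-side port in a network namespace so initiator and target can talk over real TCP on a single host: cvl_0_0 is moved into cvl_0_0_ns_spdk and given 10.0.0.2/24, cvl_0_1 stays in the root namespace as the initiator side with 10.0.0.1/24, TCP port 4420 is opened in iptables, and connectivity is verified with a ping in each direction (the replies follow below). Condensed from the trace, with the addr-flush steps omitted, the setup is:

    ip netns add cvl_0_0_ns_spdk
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                          # target-side port
    ip addr add 10.0.0.1/24 dev cvl_0_1                                # initiator address
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0  # target address
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT       # allow NVMe/TCP traffic in
    ping -c 1 10.0.0.2                                                 # root ns -> namespace
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                   # namespace -> root ns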
00:13:45.827 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.307 ms 00:13:45.827 00:13:45.827 --- 10.0.0.2 ping statistics --- 00:13:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.827 rtt min/avg/max/mdev = 0.307/0.307/0.307/0.000 ms 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:13:45.827 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:13:45.827 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.286 ms 00:13:45.827 00:13:45.827 --- 10.0.0.1 ping statistics --- 00:13:45.827 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:13:45.827 rtt min/avg/max/mdev = 0.286/0.286/0.286/0.000 ms 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@422 -- # return 0 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@13 -- # nvmfappstart -m 0xE 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@722 -- # xtrace_disable 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@481 -- # nvmfpid=287128 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@482 -- # waitforlisten 287128 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@829 -- # '[' -z 287128 ']' 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@834 -- # local max_retries=100 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:45.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
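With the test network up, the trace loads the initiator-side nvme-tcp kernel module and starts the SPDK target inside the namespace: nvmfappstart -m 0xE runs nvmf_tgt on a three-core mask with the full tracepoint mask, records its PID in nvmfpid (287128 here), and waitforlisten blocks until the application answers on the default RPC socket /var/tmp/spdk.sock. Reduced to plain commands this is roughly the following (a sketch: waitforlisten is an SPDK test helper, approximated here by polling the RPC socket; $rootdir stands for the spdk checkout at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk):

    modprobe nvme-tcp
    ip netns exec cvl_0_0_ns_spdk "$rootdir/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xE &
    nvmfpid=$!
    # Stand-in for waitforlisten: poll until the RPC socket accepts a request.
    until "$rootdir/scripts/rpc.py" -s /var/tmp/spdk.sock rpc_get_methods &>/dev/null; do
        sleep 0.5
    done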
00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@838 -- # xtrace_disable 00:13:45.827 11:59:32 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:45.827 [2024-07-25 11:59:32.431062] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:13:45.827 [2024-07-25 11:59:32.431106] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:13:45.827 EAL: No free 2048 kB hugepages reported on node 1 00:13:45.827 [2024-07-25 11:59:32.487897] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:13:45.827 [2024-07-25 11:59:32.564031] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:13:45.827 [2024-07-25 11:59:32.564069] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:13:45.827 [2024-07-25 11:59:32.564076] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:13:45.827 [2024-07-25 11:59:32.564083] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:13:45.827 [2024-07-25 11:59:32.564088] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:13:45.827 [2024-07-25 11:59:32.564188] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:13:45.827 [2024-07-25 11:59:32.564280] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:13:45.827 [2024-07-25 11:59:32.564281] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@862 -- # return 0 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@728 -- # xtrace_disable 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:13:46.086 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.087 [2024-07-25 11:59:33.276462] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.087 11:59:33 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.087 [2024-07-25 11:59:33.316096] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.087 NULL1 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@21 -- # PERF_PID=287163 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@23 -- # rpcs=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@20 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/connect_stress/connect_stress -c 0x1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@25 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:46.087 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # seq 1 20 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.345 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 
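The trace above configures the target over RPC and then kicks off the stress client: a TCP transport with 8192-byte in-capsule data (nvmf_create_transport -t tcp -o -u 8192), subsystem nqn.2016-06.io.spdk:cnode1 with serial SPDK00000000000001 and a 10-namespace cap, a listener on 10.0.0.2:4420, and a null bdev NULL1; connect_stress is then started against that subsystem for 10 seconds and its PID (287163) saved in PERF_PID, while connect_stress.sh@27-28 loop twenty times building up a batch of RPCs in rpc.txt (the here-document bodies are not visible in the xtrace). Since rpc_cmd in the harness forwards its arguments to scripts/rpc.py, the same setup expressed as direct rpc.py calls is roughly:

    rpc="$rootdir/scripts/rpc.py -s /var/tmp/spdk.sock"    # $rootdir as in the launch sketch above
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc bdev_null_create NULL1 1000 512
    # Stress client: 10 seconds of connect/disconnect against the new subsystem.
    "$rootdir/test/nvme/connect_stress/connect_stress" -c 0x1 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -t 10 &
    PERF_PID=$!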
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 EAL: No free 2048 kB hugepages reported on node 1 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@27 -- # for i in $(seq 1 20) 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- 
target/connect_stress.sh@28 -- # cat 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.346 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.605 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.605 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:46.605 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.605 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.605 11:59:33 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:46.864 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:46.864 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:46.864 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:46.864 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:46.864 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.434 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.434 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:47.434 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.434 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.434 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.694 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.694 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:47.694 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.694 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.694 11:59:34 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:47.954 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:47.954 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:47.954 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:47.954 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:47.954 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.213 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:13:48.213 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:48.213 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.213 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.213 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:48.473 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:48.473 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:48.473 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:48.473 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:48.473 11:59:35 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.042 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.042 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:49.042 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.042 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.042 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.301 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.301 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:49.301 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.301 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.301 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.560 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.560 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:49.560 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.561 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.561 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:49.819 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:49.819 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:49.819 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:49.819 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:49.820 11:59:36 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.079 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.079 
11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:50.079 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.079 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.079 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.648 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.648 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:50.648 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.648 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.648 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:50.908 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:50.908 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:50.908 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:50.908 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:50.908 11:59:37 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.166 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.166 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:51.166 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.166 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.166 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.425 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.425 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:51.425 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.425 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.425 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:51.684 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:51.684 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:51.684 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:51.684 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:51.684 11:59:38 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.252 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.252 11:59:39 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:52.252 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.252 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.252 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.511 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.511 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:52.511 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.511 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.511 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:52.771 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:52.771 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:52.771 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:52.771 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:52.771 11:59:39 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.031 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.031 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:53.031 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.031 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.031 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.327 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.327 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:53.327 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.327 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.327 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:53.897 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:53.897 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:53.897 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:53.897 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:53.897 11:59:40 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.156 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.156 11:59:41 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:54.156 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.156 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.156 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.416 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.416 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:54.416 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.416 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.416 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.675 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.675 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:54.675 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.675 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.675 11:59:41 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:54.935 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:54.935 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:54.935 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:54.935 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:54.935 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.504 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.504 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:55.504 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.504 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.504 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:55.764 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:55.764 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:55.764 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:55.764 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:55.764 11:59:42 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.024 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.024 11:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:56.024 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.024 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.024 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.288 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.288 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:56.288 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@35 -- # rpc_cmd 00:13:56.288 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@559 -- # xtrace_disable 00:13:56.288 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:56.551 Testing NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@34 -- # kill -0 287163 00:13:56.551 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/connect_stress.sh: line 34: kill: (287163) - No such process 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@38 -- # wait 287163 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@39 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpc.txt 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@41 -- # trap - SIGINT SIGTERM EXIT 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- target/connect_stress.sh@43 -- # nvmftestfini 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@488 -- # nvmfcleanup 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@117 -- # sync 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@120 -- # set +e 00:13:56.551 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@121 -- # for i in {1..20} 00:13:56.552 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:13:56.552 rmmod nvme_tcp 00:13:56.812 rmmod nvme_fabrics 00:13:56.812 rmmod nvme_keyring 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@124 -- # set -e 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@125 -- # return 0 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@489 -- # '[' -n 287128 ']' 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@490 -- # killprocess 287128 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@948 -- # '[' -z 287128 ']' 00:13:56.812 11:59:43 
nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@952 -- # kill -0 287128 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # uname 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 287128 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@966 -- # echo 'killing process with pid 287128' 00:13:56.812 killing process with pid 287128 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@967 -- # kill 287128 00:13:56.812 11:59:43 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@972 -- # wait 287128 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@278 -- # remove_spdk_ns 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:57.072 11:59:44 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:13:58.980 00:13:58.980 real 0m18.866s 00:13:58.980 user 0m41.376s 00:13:58.980 sys 0m7.986s 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@1124 -- # xtrace_disable 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_connect_stress -- common/autotest_common.sh@10 -- # set +x 00:13:58.980 ************************************ 00:13:58.980 END TEST nvmf_connect_stress 00:13:58.980 ************************************ 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@25 -- # run_test nvmf_fused_ordering /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:13:58.980 11:59:46 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:13:58.980 ************************************ 00:13:58.980 START TEST nvmf_fused_ordering 00:13:58.980 ************************************ 00:13:58.980 11:59:46 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/fused_ordering.sh --transport=tcp 00:13:59.241 * Looking for test storage... 00:13:59.241 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # uname -s 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@5 -- # export PATH 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@47 -- # : 0 00:13:59.241 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@51 -- # have_pci_nics=0 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@12 -- # nvmftestinit 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@448 -- # prepare_net_devs 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@410 -- # local -g is_hw=no 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@412 -- # remove_spdk_ns 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@285 -- # xtrace_disable 00:13:59.242 11:59:46 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # pci_devs=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # net_devs=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # e810=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@296 -- # local -ga e810 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # x722=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@297 -- # local -ga x722 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # mlx=() 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@298 -- # local -ga mlx 00:14:04.518 11:59:51 
nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:04.518 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:04.518 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:04.518 Found net devices under 0000:86:00.0: cvl_0_0 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:04.518 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:04.519 Found net devices under 0000:86:00.1: cvl_0_1 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@414 -- # is_hw=yes 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@416 -- # [[ yes == 
yes ]] 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:04.519 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:04.519 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.209 ms 00:14:04.519 00:14:04.519 --- 10.0.0.2 ping statistics --- 00:14:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.519 rtt min/avg/max/mdev = 0.209/0.209/0.209/0.000 ms 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:04.519 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:14:04.519 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.451 ms 00:14:04.519 00:14:04.519 --- 10.0.0.1 ping statistics --- 00:14:04.519 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:04.519 rtt min/avg/max/mdev = 0.451/0.451/0.451/0.000 ms 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@422 -- # return 0 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@13 -- # nvmfappstart -m 0x2 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@481 -- # nvmfpid=292307 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@482 -- # waitforlisten 292307 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@829 -- # '[' -z 292307 ']' 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:04.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:04.519 11:59:51 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:04.519 [2024-07-25 11:59:51.553209] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
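The entries above show nvmfappstart launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and waitforlisten blocking until the RPC socket at /var/tmp/spdk.sock answers. Below is a minimal sketch of that launch-and-wait pattern, not the harness code itself: the workspace path, namespace name, core mask and socket path are taken from the log entries above, while the scripts/rpc.py polling loop (using rpc_get_methods) is an assumption standing in for the harness's own waitforlisten helper.

#!/usr/bin/env bash
# Minimal sketch, assuming the paths shown in the log; the polling loop is illustrative.
set -euo pipefail

SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk   # workspace path from the log
NS=cvl_0_0_ns_spdk                                           # namespace created by nvmftestinit
RPC_SOCK=/var/tmp/spdk.sock                                  # socket named in the log

# Launch nvmf_tgt in the target namespace with the same flags recorded above.
ip netns exec "$NS" "$SPDK_DIR/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0x2 &
nvmfpid=$!

# Poll the RPC socket; rpc.py fails until the app has finished initializing.
for _ in $(seq 1 100); do
    if "$SPDK_DIR/scripts/rpc.py" -s "$RPC_SOCK" rpc_get_methods >/dev/null 2>&1; then
        break
    fi
    sleep 0.1
done
echo "nvmf_tgt is up (pid $nvmfpid)"

The -i 0 -e 0xFFFF -m 0x2 flags match the logged command line verbatim.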
00:14:04.519 [2024-07-25 11:59:51.553256] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:04.519 EAL: No free 2048 kB hugepages reported on node 1 00:14:04.519 [2024-07-25 11:59:51.610187] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:04.519 [2024-07-25 11:59:51.690784] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:04.519 [2024-07-25 11:59:51.690820] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:04.519 [2024-07-25 11:59:51.690828] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:04.519 [2024-07-25 11:59:51.690834] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:04.519 [2024-07-25 11:59:51.690839] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:04.519 [2024-07-25 11:59:51.690855] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@862 -- # return 0 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@15 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 [2024-07-25 11:59:52.389755] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@16 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@17 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- 
common/autotest_common.sh@10 -- # set +x 00:14:05.455 [2024-07-25 11:59:52.409896] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@18 -- # rpc_cmd bdev_null_create NULL1 1000 512 00:14:05.455 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.456 NULL1 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@19 -- # rpc_cmd bdev_wait_for_examine 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:05.456 11:59:52 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/fused_ordering/fused_ordering -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:14:05.456 [2024-07-25 11:59:52.462909] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
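At this point the log has recorded the subsystem setup performed by fused_ordering.sh (script lines 15-22) and the launch of the fused_ordering helper. The sketch below replays that same sequence directly through scripts/rpc.py against the default /var/tmp/spdk.sock socket; the method names, arguments, NQN and connection string are copied from the entries above, while the small rpc wrapper function is an assumption standing in for the harness's rpc_cmd.

#!/usr/bin/env bash
# Minimal sketch, assuming a target already listening on /var/tmp/spdk.sock.
set -euo pipefail
SPDK_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
rpc() { "$SPDK_DIR/scripts/rpc.py" -s /var/tmp/spdk.sock "$@"; }   # stand-in for rpc_cmd

rpc nvmf_create_transport -t tcp -o -u 8192                        # transport flags as recorded in the log
rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 10
rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
rpc bdev_null_create NULL1 1000 512                                # 1000 MB null bdev, 512-byte blocks
rpc bdev_wait_for_examine
rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 NULL1

# The fused_ordering helper is then pointed at that listener, exactly as logged:
"$SPDK_DIR/test/nvme/fused_ordering/fused_ordering" \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1'

The fused_ordering(N) lines that follow in the log are that helper's own progress output.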
00:14:05.456 [2024-07-25 11:59:52.462940] [ DPDK EAL parameters: fused_ordering --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid292552 ] 00:14:05.456 EAL: No free 2048 kB hugepages reported on node 1 00:14:06.391 Attached to nqn.2016-06.io.spdk:cnode1 00:14:06.391 Namespace ID: 1 size: 1GB 00:14:06.391 fused_ordering(0) 00:14:06.391 fused_ordering(1) 00:14:06.391 fused_ordering(2) 00:14:06.391 fused_ordering(3) 00:14:06.391 fused_ordering(4) 00:14:06.391 fused_ordering(5) 00:14:06.391 fused_ordering(6) 00:14:06.391 fused_ordering(7) 00:14:06.391 fused_ordering(8) 00:14:06.391 fused_ordering(9) 00:14:06.391 fused_ordering(10) 00:14:06.391 fused_ordering(11) 00:14:06.391 fused_ordering(12) 00:14:06.391 fused_ordering(13) 00:14:06.391 fused_ordering(14) 00:14:06.391 fused_ordering(15) 00:14:06.391 fused_ordering(16) 00:14:06.391 fused_ordering(17) 00:14:06.391 fused_ordering(18) 00:14:06.391 fused_ordering(19) 00:14:06.391 fused_ordering(20) 00:14:06.391 fused_ordering(21) 00:14:06.391 fused_ordering(22) 00:14:06.391 fused_ordering(23) 00:14:06.391 fused_ordering(24) 00:14:06.391 fused_ordering(25) 00:14:06.391 fused_ordering(26) 00:14:06.391 fused_ordering(27) 00:14:06.391 fused_ordering(28) 00:14:06.391 fused_ordering(29) 00:14:06.391 fused_ordering(30) 00:14:06.391 fused_ordering(31) 00:14:06.391 fused_ordering(32) 00:14:06.391 fused_ordering(33) 00:14:06.391 fused_ordering(34) 00:14:06.391 fused_ordering(35) 00:14:06.391 fused_ordering(36) 00:14:06.391 fused_ordering(37) 00:14:06.391 fused_ordering(38) 00:14:06.391 fused_ordering(39) 00:14:06.391 fused_ordering(40) 00:14:06.391 fused_ordering(41) 00:14:06.391 fused_ordering(42) 00:14:06.391 fused_ordering(43) 00:14:06.391 fused_ordering(44) 00:14:06.391 fused_ordering(45) 00:14:06.391 fused_ordering(46) 00:14:06.391 fused_ordering(47) 00:14:06.391 fused_ordering(48) 00:14:06.391 fused_ordering(49) 00:14:06.391 fused_ordering(50) 00:14:06.391 fused_ordering(51) 00:14:06.391 fused_ordering(52) 00:14:06.391 fused_ordering(53) 00:14:06.391 fused_ordering(54) 00:14:06.391 fused_ordering(55) 00:14:06.391 fused_ordering(56) 00:14:06.391 fused_ordering(57) 00:14:06.391 fused_ordering(58) 00:14:06.391 fused_ordering(59) 00:14:06.391 fused_ordering(60) 00:14:06.391 fused_ordering(61) 00:14:06.391 fused_ordering(62) 00:14:06.391 fused_ordering(63) 00:14:06.391 fused_ordering(64) 00:14:06.391 fused_ordering(65) 00:14:06.391 fused_ordering(66) 00:14:06.391 fused_ordering(67) 00:14:06.391 fused_ordering(68) 00:14:06.391 fused_ordering(69) 00:14:06.391 fused_ordering(70) 00:14:06.391 fused_ordering(71) 00:14:06.391 fused_ordering(72) 00:14:06.391 fused_ordering(73) 00:14:06.391 fused_ordering(74) 00:14:06.391 fused_ordering(75) 00:14:06.391 fused_ordering(76) 00:14:06.391 fused_ordering(77) 00:14:06.391 fused_ordering(78) 00:14:06.391 fused_ordering(79) 00:14:06.391 fused_ordering(80) 00:14:06.391 fused_ordering(81) 00:14:06.391 fused_ordering(82) 00:14:06.391 fused_ordering(83) 00:14:06.391 fused_ordering(84) 00:14:06.391 fused_ordering(85) 00:14:06.391 fused_ordering(86) 00:14:06.391 fused_ordering(87) 00:14:06.391 fused_ordering(88) 00:14:06.391 fused_ordering(89) 00:14:06.391 fused_ordering(90) 00:14:06.391 fused_ordering(91) 00:14:06.391 fused_ordering(92) 00:14:06.391 fused_ordering(93) 00:14:06.391 fused_ordering(94) 00:14:06.391 fused_ordering(95) 00:14:06.391 fused_ordering(96) 
00:14:06.391 fused_ordering(97) 00:14:06.391 fused_ordering(98) ... 00:14:10.581 fused_ordering(955) 00:14:10.581 fused_ordering(956)
00:14:10.581 fused_ordering(957) 00:14:10.581 fused_ordering(958) 00:14:10.581 fused_ordering(959) 00:14:10.581 fused_ordering(960) 00:14:10.581 fused_ordering(961) 00:14:10.581 fused_ordering(962) 00:14:10.581 fused_ordering(963) 00:14:10.581 fused_ordering(964) 00:14:10.581 fused_ordering(965) 00:14:10.581 fused_ordering(966) 00:14:10.581 fused_ordering(967) 00:14:10.581 fused_ordering(968) 00:14:10.581 fused_ordering(969) 00:14:10.581 fused_ordering(970) 00:14:10.581 fused_ordering(971) 00:14:10.581 fused_ordering(972) 00:14:10.581 fused_ordering(973) 00:14:10.581 fused_ordering(974) 00:14:10.581 fused_ordering(975) 00:14:10.581 fused_ordering(976) 00:14:10.581 fused_ordering(977) 00:14:10.581 fused_ordering(978) 00:14:10.581 fused_ordering(979) 00:14:10.581 fused_ordering(980) 00:14:10.581 fused_ordering(981) 00:14:10.581 fused_ordering(982) 00:14:10.581 fused_ordering(983) 00:14:10.581 fused_ordering(984) 00:14:10.581 fused_ordering(985) 00:14:10.581 fused_ordering(986) 00:14:10.581 fused_ordering(987) 00:14:10.581 fused_ordering(988) 00:14:10.581 fused_ordering(989) 00:14:10.581 fused_ordering(990) 00:14:10.581 fused_ordering(991) 00:14:10.581 fused_ordering(992) 00:14:10.581 fused_ordering(993) 00:14:10.581 fused_ordering(994) 00:14:10.581 fused_ordering(995) 00:14:10.581 fused_ordering(996) 00:14:10.581 fused_ordering(997) 00:14:10.581 fused_ordering(998) 00:14:10.581 fused_ordering(999) 00:14:10.581 fused_ordering(1000) 00:14:10.581 fused_ordering(1001) 00:14:10.581 fused_ordering(1002) 00:14:10.581 fused_ordering(1003) 00:14:10.581 fused_ordering(1004) 00:14:10.581 fused_ordering(1005) 00:14:10.581 fused_ordering(1006) 00:14:10.581 fused_ordering(1007) 00:14:10.581 fused_ordering(1008) 00:14:10.581 fused_ordering(1009) 00:14:10.581 fused_ordering(1010) 00:14:10.581 fused_ordering(1011) 00:14:10.581 fused_ordering(1012) 00:14:10.581 fused_ordering(1013) 00:14:10.581 fused_ordering(1014) 00:14:10.581 fused_ordering(1015) 00:14:10.581 fused_ordering(1016) 00:14:10.581 fused_ordering(1017) 00:14:10.581 fused_ordering(1018) 00:14:10.581 fused_ordering(1019) 00:14:10.581 fused_ordering(1020) 00:14:10.581 fused_ordering(1021) 00:14:10.581 fused_ordering(1022) 00:14:10.581 fused_ordering(1023) 00:14:10.581 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@23 -- # trap - SIGINT SIGTERM EXIT 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- target/fused_ordering.sh@25 -- # nvmftestfini 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@117 -- # sync 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@120 -- # set +e 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:10.582 rmmod nvme_tcp 00:14:10.582 rmmod nvme_fabrics 00:14:10.582 rmmod nvme_keyring 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@124 -- # set -e 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@125 
-- # return 0 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@489 -- # '[' -n 292307 ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@490 -- # killprocess 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@948 -- # '[' -z 292307 ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@952 -- # kill -0 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # uname 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@966 -- # echo 'killing process with pid 292307' 00:14:10.582 killing process with pid 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@967 -- # kill 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@972 -- # wait 292307 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:10.582 11:59:57 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:13.118 00:14:13.118 real 0m13.623s 00:14:13.118 user 0m9.303s 00:14:13.118 sys 0m7.545s 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_fused_ordering -- common/autotest_common.sh@10 -- # set +x 00:14:13.118 ************************************ 00:14:13.118 END TEST nvmf_fused_ordering 00:14:13.118 ************************************ 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@26 -- # run_test nvmf_ns_masking test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:13.118 11:59:59 
nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:13.118 ************************************ 00:14:13.118 START TEST nvmf_ns_masking 00:14:13.118 ************************************ 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1123 -- # test/nvmf/target/ns_masking.sh --transport=tcp 00:14:13.118 * Looking for test storage... 00:14:13.118 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@8 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # uname -s 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:13.118 11:59:59 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:13.118 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@5 -- # export PATH 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@47 -- # : 0 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:13.119 12:00:00 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@10 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@11 -- # hostsock=/var/tmp/host.sock 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@12 -- # loops=5 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # uuidgen 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@13 -- # ns1uuid=12248be9-0c25-4cf8-85ac-02c6381a7561 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # uuidgen 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@14 -- # ns2uuid=71fb53cd-76a5-4f12-a126-56d3abf89091 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@16 -- # SUBSYSNQN=nqn.2016-06.io.spdk:cnode1 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@17 -- # HOSTNQN1=nqn.2016-06.io.spdk:host1 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@18 -- # HOSTNQN2=nqn.2016-06.io.spdk:host2 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # uuidgen 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@19 -- # HOSTID=c3388497-17ca-458e-bd2e-af4ce6865aee 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@50 -- # nvmftestinit 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@285 -- # xtrace_disable 00:14:13.119 12:00:00 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@291 -- # pci_devs=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@291 -- # local -a pci_devs 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # net_devs=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # e810=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@296 -- # local -ga e810 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # x722=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@297 -- # local -ga x722 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # mlx=() 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@298 -- # local -ga mlx 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:18.428 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:18.429 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:18.429 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:18.429 Found net devices under 0000:86:00.0: cvl_0_0 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@388 -- # [[ 
tcp == tcp ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:18.429 Found net devices under 0000:86:00.1: cvl_0_1 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@414 -- # is_hw=yes 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:18.429 12:00:05 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:18.429 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:18.429 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.274 ms 00:14:18.429 00:14:18.429 --- 10.0.0.2 ping statistics --- 00:14:18.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.429 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:18.429 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:14:18.429 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.341 ms 00:14:18.429 00:14:18.429 --- 10.0.0.1 ping statistics --- 00:14:18.429 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:18.429 rtt min/avg/max/mdev = 0.341/0.341/0.341/0.000 ms 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@422 -- # return 0 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@51 -- # nvmfappstart 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@481 -- # nvmfpid=296890 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@482 -- # waitforlisten 296890 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 296890 ']' 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:14:18.429 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:18.429 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:18.430 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:18.430 12:00:05 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:14:18.430 [2024-07-25 12:00:05.537623] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:14:18.430 [2024-07-25 12:00:05.537666] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:18.430 EAL: No free 2048 kB hugepages reported on node 1 00:14:18.430 [2024-07-25 12:00:05.593199] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:18.430 [2024-07-25 12:00:05.673453] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:18.430 [2024-07-25 12:00:05.673482] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:18.430 [2024-07-25 12:00:05.673489] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:18.430 [2024-07-25 12:00:05.673495] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:18.430 [2024-07-25 12:00:05.673501] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:18.430 [2024-07-25 12:00:05.673517] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:19.080 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:19.080 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:19.080 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:19.080 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:19.080 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:19.339 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:19.339 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:14:19.339 [2024-07-25 12:00:06.517055] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:19.339 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@55 -- # MALLOC_BDEV_SIZE=64 00:14:19.339 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@56 -- # MALLOC_BLOCK_SIZE=512 00:14:19.339 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:19.598 Malloc1 00:14:19.598 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 
bdev_malloc_create 64 512 -b Malloc2 00:14:19.857 Malloc2 00:14:19.857 12:00:06 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME 00:14:19.857 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@63 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 00:14:20.116 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:20.374 [2024-07-25 12:00:07.442161] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@67 -- # connect 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3388497-17ca-458e-bd2e-af4ce6865aee -a 10.0.0.2 -s 4420 -i 4 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n '' ]] 00:14:20.374 12:00:07 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:22.902 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:22.902 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@68 -- # ns_is_visible 0x1 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.903 [ 0]:0x1 00:14:22.903 12:00:09 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=106cb05d4fa64ac6b84f393163ddb6ed 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 106cb05d4fa64ac6b84f393163ddb6ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@72 -- # ns_is_visible 0x1 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:22.903 [ 0]:0x1 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=106cb05d4fa64ac6b84f393163ddb6ed 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 106cb05d4fa64ac6b84f393163ddb6ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@73 -- # ns_is_visible 0x2 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:22.903 [ 1]:0x2 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@75 -- # disconnect 00:14:22.903 12:00:09 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:22.903 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:22.903 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:23.162 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible 00:14:23.162 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@83 -- # connect 1 00:14:23.162 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3388497-17ca-458e-bd2e-af4ce6865aee -a 10.0.0.2 -s 4420 -i 4 00:14:23.420 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 1 00:14:23.421 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:23.421 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:23.421 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 1 ]] 00:14:23.421 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=1 00:14:23.421 12:00:10 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=1 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@84 -- # NOT ns_is_visible 0x1 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 
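The ns_is_visible checks in this stretch of the log reduce to two nvme-cli calls: list the namespace IDs the connected controller exposes, then read the NGUID of the namespace in question. A minimal standalone sketch of the same check, assuming the controller enumerates as /dev/nvme0 as it does here and that nvme-cli and jq are installed:

    # show which namespace IDs this host can currently see
    nvme list-ns /dev/nvme0
    # read the NGUID of namespace 1; the test treats an all-zero NGUID
    # as "namespace not visible to this host"
    nvme id-ns /dev/nvme0 -n 0x1 -o json | jq -r .nguid
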
00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@85 -- # ns_is_visible 0x2 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.953 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.953 [ 0]:0x2 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@89 -- # ns_is_visible 0x1 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:25.954 [ 0]:0x1 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:25.954 12:00:12 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=106cb05d4fa64ac6b84f393163ddb6ed 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 106cb05d4fa64ac6b84f393163ddb6ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@90 -- # ns_is_visible 0x2 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:25.954 [ 1]:0x2 00:14:25.954 
12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:25.954 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@94 -- # NOT ns_is_visible 0x1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@95 -- # ns_is_visible 0x2 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:26.213 [ 0]:0x2 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns 
/dev/nvme0 -n 0x2 -o json 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@97 -- # disconnect 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:26.213 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:26.213 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@101 -- # connect 2 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@22 -- # nvme connect -t tcp -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -I c3388497-17ca-458e-bd2e-af4ce6865aee -a 10.0.0.2 -s 4420 -i 4 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@24 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1198 -- # local i=0 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:26.471 12:00:13 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1205 -- # sleep 2 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1208 -- # return 0 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # jq -r '.[].Subsystems[] | select(.NQN=="nqn.2016-06.io.spdk:cnode1") | .Paths[0].Name' 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # nvme list-subsys -o json 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@26 -- # ctrl_id=nvme0 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@27 -- # [[ -z nvme0 ]] 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@102 -- # ns_is_visible 0x1 00:14:29.001 
12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.001 [ 0]:0x1 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.001 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=106cb05d4fa64ac6b84f393163ddb6ed 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 106cb05d4fa64ac6b84f393163ddb6ed != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@103 -- # ns_is_visible 0x2 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.002 [ 1]:0x2 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.002 12:00:15 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@106 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@107 -- # NOT ns_is_visible 0x1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- 
target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@108 -- # ns_is_visible 0x2 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.002 [ 0]:0x2 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@111 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@642 -- # [[ -x 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py ]] 00:14:29.002 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host1 00:14:29.260 [2024-07-25 12:00:16.319422] nvmf_rpc.c:1798:nvmf_rpc_ns_visible_paused: *ERROR*: Unable to add/remove nqn.2016-06.io.spdk:host1 to namespace ID 2 00:14:29.260 request: 00:14:29.260 { 00:14:29.260 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:14:29.260 "nsid": 2, 00:14:29.260 "host": "nqn.2016-06.io.spdk:host1", 00:14:29.260 "method": "nvmf_ns_remove_host", 00:14:29.260 "req_id": 1 00:14:29.260 } 00:14:29.260 Got JSON-RPC error response 00:14:29.260 response: 00:14:29.260 { 00:14:29.260 "code": -32602, 00:14:29.260 "message": "Invalid parameters" 00:14:29.260 } 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@112 -- # NOT ns_is_visible 0x1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@648 -- # local es=0 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@650 -- # valid_exec_arg ns_is_visible 0x1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@636 -- # local arg=ns_is_visible 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # type -t ns_is_visible 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # ns_is_visible 0x1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x1 -o json 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=00000000000000000000000000000000 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ 00000000000000000000000000000000 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@651 -- # es=1 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:14:29.260 12:00:16 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@113 -- # ns_is_visible 0x2 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # nvme list-ns /dev/nvme0 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@43 -- # grep 0x2 00:14:29.260 [ 0]:0x2 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nvme id-ns /dev/nvme0 -n 0x2 -o json 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # jq -r .nguid 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@44 -- # nguid=b1a5059f6ef74042b3b1a97d7cc89b04 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@45 -- # [[ b1a5059f6ef74042b3b1a97d7cc89b04 != \0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0\0 ]] 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@114 -- # disconnect 00:14:29.260 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@38 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:29.519 NQN:nqn.2016-06.io.spdk:cnode1 disconnected 1 controller(s) 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@118 -- # hostpid=299279 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@119 -- # trap 'killprocess $hostpid; nvmftestfini' SIGINT SIGTERM EXIT 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@121 -- # waitforlisten 299279 /var/tmp/host.sock 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@117 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@829 -- # '[' -z 299279 ']' 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 00:14:29.519 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:29.519 12:00:16 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:29.519 [2024-07-25 12:00:16.678126] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
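Around this point the test launches a second SPDK application to play the host role, giving it its own RPC socket so it can be driven independently of the target process. A condensed sketch of that pattern, with the socket path, core mask, and attach parameters taken from this log (the backgrounding with '&' and the shortened relative paths are illustrative assumptions, not the test's exact invocation):

    # host-side SPDK app: separate RPC socket, core mask 0x2
    ./build/bin/spdk_tgt -r /var/tmp/host.sock -m 2 &
    # once it is listening, drive it via rpc.py -s, e.g. to attach the
    # target's subsystem as a local bdev over NVMe/TCP
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller \
        -t tcp -a 10.0.0.2 -f ipv4 -s 4420 \
        -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0
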
00:14:29.519 [2024-07-25 12:00:16.678171] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid299279 ] 00:14:29.519 EAL: No free 2048 kB hugepages reported on node 1 00:14:29.519 [2024-07-25 12:00:16.729876] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:29.777 [2024-07-25 12:00:16.808955] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:30.344 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:30.344 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@862 -- # return 0 00:14:30.344 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@122 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 1 00:14:30.603 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_ns nqn.2016-06.io.spdk:cnode1 2 00:14:30.603 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # uuid2nguid 12248be9-0c25-4cf8-85ac-02c6381a7561 00:14:30.603 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:30.603 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 -g 12248BE90C254CF885AC02C6381A7561 -i 00:14:30.861 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # uuid2nguid 71fb53cd-76a5-4f12-a126-56d3abf89091 00:14:30.861 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@759 -- # tr -d - 00:14:30.861 12:00:17 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@125 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc2 -n 2 -g 71FB53CD76A54F12A12656D3ABF89091 -i 00:14:31.119 12:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@126 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1 00:14:31.119 12:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@127 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 2 nqn.2016-06.io.spdk:host2 00:14:31.377 12:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@129 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.377 12:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 -b nvme0 00:14:31.635 nvme0n1 00:14:31.635 12:00:18 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@131 -- # hostrpc bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.635 12:00:18 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 -b nvme1 00:14:31.894 nvme1n2 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # hostrpc bdev_get_bdevs 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # jq -r '.[].name' 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # sort 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # xargs 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@134 -- # [[ nvme0n1 nvme1n2 == \n\v\m\e\0\n\1\ \n\v\m\e\1\n\2 ]] 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # hostrpc bdev_get_bdevs -b nvme0n1 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # jq -r '.[].uuid' 00:14:32.151 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme0n1 00:14:32.408 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@135 -- # [[ 12248be9-0c25-4cf8-85ac-02c6381a7561 == \1\2\2\4\8\b\e\9\-\0\c\2\5\-\4\c\f\8\-\8\5\a\c\-\0\2\c\6\3\8\1\a\7\5\6\1 ]] 00:14:32.408 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # jq -r '.[].uuid' 00:14:32.408 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # hostrpc bdev_get_bdevs -b nvme1n2 00:14:32.408 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_get_bdevs -b nvme1n2 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@136 -- # [[ 71fb53cd-76a5-4f12-a126-56d3abf89091 == \7\1\f\b\5\3\c\d\-\7\6\a\5\-\4\f\1\2\-\a\1\2\6\-\5\6\d\3\a\b\f\8\9\0\9\1 ]] 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@138 -- # killprocess 299279 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 299279 ']' 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 299279 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 299279 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing 
process with pid 299279' 00:14:32.667 killing process with pid 299279 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 299279 00:14:32.667 12:00:19 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 299279 00:14:32.926 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@139 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@141 -- # trap - SIGINT SIGTERM EXIT 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- target/ns_masking.sh@142 -- # nvmftestfini 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@117 -- # sync 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@120 -- # set +e 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:33.185 rmmod nvme_tcp 00:14:33.185 rmmod nvme_fabrics 00:14:33.185 rmmod nvme_keyring 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@124 -- # set -e 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@125 -- # return 0 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@489 -- # '[' -n 296890 ']' 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@490 -- # killprocess 296890 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@948 -- # '[' -z 296890 ']' 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@952 -- # kill -0 296890 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # uname 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 296890 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@966 -- # echo 'killing process with pid 296890' 00:14:33.185 killing process with pid 296890 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@967 -- # kill 296890 00:14:33.185 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@972 -- # wait 296890 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:33.444 12:00:20 
nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:33.444 12:00:20 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.974 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:35.974 00:14:35.975 real 0m22.717s 00:14:35.975 user 0m24.344s 00:14:35.975 sys 0m6.120s 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_ns_masking -- common/autotest_common.sh@10 -- # set +x 00:14:35.975 ************************************ 00:14:35.975 END TEST nvmf_ns_masking 00:14:35.975 ************************************ 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@27 -- # [[ 1 -eq 1 ]] 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@28 -- # run_test nvmf_nvme_cli /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:35.975 ************************************ 00:14:35.975 START TEST nvmf_nvme_cli 00:14:35.975 ************************************ 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvme_cli.sh --transport=tcp 00:14:35.975 * Looking for test storage... 
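The ns_masking test that wraps up above exercises the per-host visibility RPCs end to end. Stripped of the harness, the core target-side sequence it drives looks roughly like the following (the rpc.py path is shortened here; the subsystem and host NQNs are the ones used in this log):

    # add a namespace that is hidden from all hosts by default
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 1 --no-auto-visible
    # grant, then revoke, visibility of namespace 1 for one host NQN
    scripts/rpc.py nvmf_ns_add_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
    scripts/rpc.py nvmf_ns_remove_host nqn.2016-06.io.spdk:cnode1 1 nqn.2016-06.io.spdk:host1
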
00:14:35.975 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # uname -s 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.975 12:00:22 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@5 -- # export PATH 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@47 -- # : 0 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@11 -- # MALLOC_BDEV_SIZE=64 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@14 -- # devs=() 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@16 -- # 
nvmftestinit 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@448 -- # prepare_net_devs 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@410 -- # local -g is_hw=no 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@412 -- # remove_spdk_ns 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@285 -- # xtrace_disable 00:14:35.975 12:00:22 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # pci_devs=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@291 -- # local -a pci_devs 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # pci_net_devs=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # pci_drivers=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@293 -- # local -A pci_drivers 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # net_devs=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@295 -- # local -ga net_devs 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # e810=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@296 -- # local -ga e810 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # x722=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@297 -- # local -ga x722 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # mlx=() 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@298 -- # local -ga mlx 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:14:41.276 12:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:14:41.276 Found 0000:86:00.0 (0x8086 - 0x159b) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:14:41.276 Found 0000:86:00.1 (0x8086 - 0x159b) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:14:41.276 12:00:27 
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:14:41.276 Found net devices under 0000:86:00.0: cvl_0_0 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@390 -- # [[ up == up ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:14:41.276 Found net devices under 0000:86:00.1: cvl_0_1 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@414 -- # is_hw=yes 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:14:41.276 12:00:27 
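The device-selection loop above amounts to: for each candidate PCI function, look for kernel net devices under /sys/bus/pci/devices/<bdf>/net and keep the usable ones. A minimal standalone sketch of that lookup follows; the two PCI addresses are the E810 ports reported in the trace, while the hard-coded list and the operstate test are assumptions standing in for the harness's pci_bus_cache machinery and its [[ up == up ]] check, which are not fully visible here.

  # Sketch only: enumerate net devices sitting on known NIC PCI functions.
  # The BDF list comes from the 'Found 0000:86:00.x' lines above; checking
  # operstate is an assumed stand-in for the harness's own "up" test.
  net_devs=()
  for pci in 0000:86:00.0 0000:86:00.1; do
      for path in "/sys/bus/pci/devices/$pci/net/"*; do
          [ -e "$path" ] || continue
          dev=${path##*/}
          state=$(cat "/sys/class/net/$dev/operstate" 2>/dev/null)
          echo "Found net devices under $pci: $dev (state: $state)"
          net_devs+=("$dev")
      done
  done
  printf 'candidate interfaces: %s\n' "${net_devs[*]}"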
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:14:41.276 12:00:27 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:14:41.276 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:14:41.276 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.187 ms 00:14:41.276 00:14:41.276 --- 10.0.0.2 ping statistics --- 00:14:41.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.276 rtt min/avg/max/mdev = 0.187/0.187/0.187/0.000 ms 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:14:41.276 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
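Collected in one place, the nvmf_tcp_init sequence traced above is plain namespace plumbing: the first port becomes the target side inside a fresh network namespace with 10.0.0.2, the second port stays in the root namespace as the initiator with 10.0.0.1, and TCP port 4420 is opened for NVMe/TCP. The snippet below only gathers the commands from the trace into one runnable block (root required); the TARGET_IF/INITIATOR_IF/NS variable names are introduced here for readability and are not part of the harness.

  TARGET_IF=cvl_0_0       # moves into the namespace, will own 10.0.0.2
  INITIATOR_IF=cvl_0_1    # stays in the root namespace, will own 10.0.0.1
  NS=cvl_0_0_ns_spdk

  ip -4 addr flush "$TARGET_IF"
  ip -4 addr flush "$INITIATOR_IF"
  ip netns add "$NS"
  ip link set "$TARGET_IF" netns "$NS"
  ip addr add 10.0.0.1/24 dev "$INITIATOR_IF"
  ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TARGET_IF"
  ip link set "$INITIATOR_IF" up
  ip netns exec "$NS" ip link set "$TARGET_IF" up
  ip netns exec "$NS" ip link set lo up
  # Allow NVMe/TCP (port 4420) in through the initiator-facing interface.
  iptables -I INPUT 1 -i "$INITIATOR_IF" -p tcp --dport 4420 -j ACCEPT
  # Connectivity checks in both directions, exactly as in the trace.
  ping -c 1 10.0.0.2
  ip netns exec "$NS" ping -c 1 10.0.0.1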
00:14:41.276 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.221 ms 00:14:41.276 00:14:41.276 --- 10.0.0.1 ping statistics --- 00:14:41.276 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:14:41.276 rtt min/avg/max/mdev = 0.221/0.221/0.221/0.000 ms 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@422 -- # return 0 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:14:41.276 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@17 -- # nvmfappstart -m 0xF 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@722 -- # xtrace_disable 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@481 -- # nvmfpid=303289 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@482 -- # waitforlisten 303289 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@829 -- # '[' -z 303289 ']' 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:41.282 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:41.282 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.282 [2024-07-25 12:00:28.147990] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
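nvmfappstart then launches the target inside that namespace (pid 303289 in this run) and waits for its RPC socket. A condensed sketch is below; the command line and flags are the ones traced above (-i shared-memory id, -e 0xFFFF tracepoint group mask, -m 0xF for the four cores the reactors start on), while the spdk_get_version polling loop is an assumption used here in place of the harness's waitforlisten helper.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  NS=cvl_0_0_ns_spdk

  # Start the target in the target-side namespace, as traced above.
  ip netns exec "$NS" "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m 0xF &
  nvmfpid=$!

  # Wait until the app answers on its default RPC socket (/var/tmp/spdk.sock).
  # Assumption: polling spdk_get_version; the harness uses waitforlisten instead.
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done
  echo "nvmf_tgt (pid $nvmfpid) is ready"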
00:14:41.282 [2024-07-25 12:00:28.148039] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:41.282 EAL: No free 2048 kB hugepages reported on node 1 00:14:41.282 [2024-07-25 12:00:28.208738] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:41.282 [2024-07-25 12:00:28.290462] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:41.282 [2024-07-25 12:00:28.290502] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:14:41.282 [2024-07-25 12:00:28.290510] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:41.282 [2024-07-25 12:00:28.290516] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:41.282 [2024-07-25 12:00:28.290521] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:41.283 [2024-07-25 12:00:28.290570] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:41.283 [2024-07-25 12:00:28.290665] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:41.283 [2024-07-25 12:00:28.290749] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:41.283 [2024-07-25 12:00:28.290751] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@862 -- # return 0 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@728 -- # xtrace_disable 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@19 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:28 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 [2024-07-25 12:00:29.004490] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@21 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 Malloc0 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@22 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:14:41.871 12:00:29 
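Once the target is up, everything else in nvme_cli.sh is driven over JSON-RPC. The first calls traced above create the TCP transport and two 64 MB malloc bdevs with a 512-byte block size (MALLOC_BDEV_SIZE and MALLOC_BLOCK_SIZE in the harness). Roughly equivalent direct invocations are shown below; rpc() is a hypothetical convenience wrapper introduced here, standing in for the harness's rpc_cmd, which ends up issuing the same calls against /var/tmp/spdk.sock.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # hypothetical helper, not from the harness

  # Transport flags exactly as in the traced rpc_cmd call.
  rpc nvmf_create_transport -t tcp -o -u 8192
  # Two 64 MB backing devices, 512-byte blocks.
  rpc bdev_malloc_create 64 512 -b Malloc0
  rpc bdev_malloc_create 64 512 -b Malloc1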
nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 Malloc1 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@24 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@25 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@26 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@27 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 [2024-07-25 12:00:29.082108] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@28 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:41.871 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@30 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -a 10.0.0.2 -s 4420 00:14:42.130 00:14:42.130 Discovery Log Number of Records 2, Generation counter 2 00:14:42.130 =====Discovery Log Entry 0====== 00:14:42.130 trtype: tcp 00:14:42.130 adrfam: ipv4 00:14:42.130 subtype: current discovery subsystem 00:14:42.130 treq: not required 
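The subsystem side is likewise a handful of RPCs, traced above just before the discovery output: one subsystem carrying both malloc namespaces, a data listener on 10.0.0.2:4420, and a discovery listener so that nvme discover (whose output continues below) has something to report. Collected, with the same hypothetical rpc() wrapper as before:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # hypothetical helper, not from the harness

  NQN=nqn.2016-06.io.spdk:cnode1
  # -a allow any host, -s serial number, -d model string; -i is passed
  # through exactly as in the traced call.
  rpc nvmf_create_subsystem "$NQN" -a -s SPDKISFASTANDAWESOME -d SPDK_Controller1 -i 291
  rpc nvmf_subsystem_add_ns "$NQN" Malloc0
  rpc nvmf_subsystem_add_ns "$NQN" Malloc1
  rpc nvmf_subsystem_add_listener "$NQN" -t tcp -a 10.0.0.2 -s 4420
  rpc nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420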
00:14:42.130 portid: 0 00:14:42.130 trsvcid: 4420 00:14:42.130 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:14:42.130 traddr: 10.0.0.2 00:14:42.130 eflags: explicit discovery connections, duplicate discovery information 00:14:42.130 sectype: none 00:14:42.130 =====Discovery Log Entry 1====== 00:14:42.130 trtype: tcp 00:14:42.130 adrfam: ipv4 00:14:42.130 subtype: nvme subsystem 00:14:42.130 treq: not required 00:14:42.130 portid: 0 00:14:42.130 trsvcid: 4420 00:14:42.130 subnqn: nqn.2016-06.io.spdk:cnode1 00:14:42.130 traddr: 10.0.0.2 00:14:42.130 eflags: none 00:14:42.130 sectype: none 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # devs=($(get_nvme_devs)) 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # get_nvme_devs 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@31 -- # nvme_num_before_connection=0 00:14:42.130 12:00:29 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@32 -- # nvme connect --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -t tcp -n nqn.2016-06.io.spdk:cnode1 -a 10.0.0.2 -s 4420 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@34 -- # waitforserial SPDKISFASTANDAWESOME 2 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1198 -- # local i=0 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1199 -- # local nvme_device_counter=1 nvme_devices=0 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1200 -- # [[ -n 2 ]] 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1201 -- # nvme_device_counter=2 00:14:43.506 12:00:30 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1205 -- # sleep 2 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1206 -- # (( i++ <= 15 )) 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # lsblk -l -o NAME,SERIAL 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # grep -c SPDKISFASTANDAWESOME 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1207 -- # nvme_devices=2 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # (( nvme_devices == nvme_device_counter )) 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1208 -- # return 0 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
target/nvme_cli.sh@35 -- # get_nvme_devs 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@35 -- # [[ -z /dev/nvme0n2 00:14:45.410 /dev/nvme0n1 ]] 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # devs=($(get_nvme_devs)) 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # get_nvme_devs 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@522 -- # local dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.410 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@521 -- # nvme list 00:14:45.671 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ Node == /dev/nvme* ]] 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ --------------------- == /dev/nvme* ]] 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n2 == /dev/nvme* ]] 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n2 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@525 -- # [[ /dev/nvme0n1 == /dev/nvme* ]] 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@526 -- # echo /dev/nvme0n1 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@524 -- # read -r dev _ 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@59 -- # nvme_num=2 00:14:45.672 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@60 -- # nvme disconnect -n nqn.2016-06.io.spdk:cnode1 00:14:45.933 NQN:nqn.2016-06.io.spdk:cnode1 
disconnected 1 controller(s) 00:14:45.933 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@61 -- # waitforserial_disconnect SPDKISFASTANDAWESOME 00:14:45.933 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1219 -- # local i=0 00:14:45.933 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # lsblk -o NAME,SERIAL 00:14:45.933 12:00:32 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1220 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # lsblk -l -o NAME,SERIAL 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1227 -- # grep -q -w SPDKISFASTANDAWESOME 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1231 -- # return 0 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@62 -- # (( nvme_num <= nvme_num_before_connection )) 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@67 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@559 -- # xtrace_disable 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@68 -- # trap - SIGINT SIGTERM EXIT 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- target/nvme_cli.sh@70 -- # nvmftestfini 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@488 -- # nvmfcleanup 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@117 -- # sync 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@120 -- # set +e 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@121 -- # for i in {1..20} 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:14:45.933 rmmod nvme_tcp 00:14:45.933 rmmod nvme_fabrics 00:14:45.933 rmmod nvme_keyring 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@124 -- # set -e 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@125 -- # return 0 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@489 -- # '[' -n 303289 ']' 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@490 -- # killprocess 303289 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@948 -- # '[' -z 303289 ']' 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@952 -- # kill -0 303289 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # uname 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:14:45.933 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 303289 00:14:45.934 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:14:45.934 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:14:45.934 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@966 -- # echo 'killing process with pid 303289' 00:14:45.934 killing process with pid 303289 00:14:45.934 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@967 -- # kill 303289 00:14:45.934 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@972 -- # wait 303289 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@278 -- # remove_spdk_ns 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:14:46.199 12:00:33 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:14:48.755 00:14:48.755 real 0m12.748s 00:14:48.755 user 0m21.690s 00:14:48.755 sys 0m4.544s 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@1124 -- # xtrace_disable 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_nvme_cli -- common/autotest_common.sh@10 -- # set +x 00:14:48.755 ************************************ 00:14:48.755 END TEST nvmf_nvme_cli 00:14:48.755 ************************************ 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@30 -- # [[ 1 -eq 1 ]] 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@31 -- # run_test nvmf_vfio_user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:14:48.755 ************************************ 00:14:48.755 START TEST nvmf_vfio_user 00:14:48.755 ************************************ 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/nvmf_vfio_user.sh --transport=tcp 00:14:48.755 * Looking for test storage... 
00:14:48.755 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # uname -s 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@5 -- # export PATH 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@47 -- # : 0 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- nvmf/common.sh@51 -- # have_pci_nics=0 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@12 -- # MALLOC_BDEV_SIZE=64 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@14 -- # NUM_DEVICES=2 00:14:48.755 12:00:35 
nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@16 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@47 -- # rm -rf /var/run/vfio-user 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@103 -- # setup_nvmf_vfio_user '' '' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args= 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local transport_args= 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=304794 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 304794' 00:14:48.755 Process pid: 304794 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 304794 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 304794 ']' 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:48.755 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:48.755 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:14:48.756 12:00:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:14:48.756 [2024-07-25 12:00:35.667061] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:14:48.756 [2024-07-25 12:00:35.667112] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:14:48.756 EAL: No free 2048 kB hugepages reported on node 1 00:14:48.756 [2024-07-25 12:00:35.725255] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:14:48.756 [2024-07-25 12:00:35.806176] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:14:48.756 [2024-07-25 12:00:35.806217] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
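For the vfio-user variant there is no kernel networking to prepare: the harness exports TEST_TRANSPORT=VFIOUSER, wipes /var/run/vfio-user, and starts the target directly in the root namespace, this time pinned with an explicit core list instead of a mask (pid 304794 in this run). A condensed sketch follows, reusing the same spdk_get_version polling assumption for the wait.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  export TEST_TRANSPORT=VFIOUSER

  rm -rf /var/run/vfio-user
  "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF -m '[0,1,2,3]' &
  nvmfpid=$!
  echo "Process pid: $nvmfpid"
  # Assumption: poll an RPC until the app is listening; the harness uses waitforlisten.
  until "$SPDK/scripts/rpc.py" spdk_get_version >/dev/null 2>&1; do
      sleep 0.5
  done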
00:14:48.756 [2024-07-25 12:00:35.806224] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:14:48.756 [2024-07-25 12:00:35.806230] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:14:48.756 [2024-07-25 12:00:35.806235] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:14:48.756 [2024-07-25 12:00:35.806290] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:14:48.756 [2024-07-25 12:00:35.806303] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:14:48.756 [2024-07-25 12:00:35.806398] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:14:48.756 [2024-07-25 12:00:35.806399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:14:49.322 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:14:49.322 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:14:49.322 12:00:36 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:14:50.256 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER 00:14:50.513 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:14:50.513 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:14:50.513 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:50.513 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:14:50.513 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:14:50.771 Malloc1 00:14:50.771 12:00:37 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:14:51.029 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:14:51.029 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:14:51.287 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:51.287 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:14:51.287 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:14:51.545 Malloc2 00:14:51.545 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 
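setup_nvmf_vfio_user then applies the same per-device recipe for each of the NUM_DEVICES=2 controllers: a directory under /var/run/vfio-user serves as the listen address, and each subsystem gets one 64 MB malloc namespace and a VFIOUSER listener with -s 0, exactly as traced (the second device's add_ns and add_listener calls are the ones that follow below). Written as a loop, again with the hypothetical rpc() wrapper:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  rpc() { "$SPDK/scripts/rpc.py" "$@"; }   # hypothetical helper, not from the harness

  rpc nvmf_create_transport -t VFIOUSER
  mkdir -p /var/run/vfio-user

  for i in 1 2; do
      traddr=/var/run/vfio-user/domain/vfio-user$i/$i
      mkdir -p "$traddr"
      rpc bdev_malloc_create 64 512 -b "Malloc$i"
      rpc nvmf_create_subsystem "nqn.2019-07.io.spdk:cnode$i" -a -s "SPDK$i"
      rpc nvmf_subsystem_add_ns "nqn.2019-07.io.spdk:cnode$i" "Malloc$i"
      rpc nvmf_subsystem_add_listener "nqn.2019-07.io.spdk:cnode$i" \
          -t VFIOUSER -a "$traddr" -s 0
  done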
00:14:51.803 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:14:51.803 12:00:38 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@104 -- # run_nvmf_vfio_user 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # seq 1 2 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user1/1 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode1 00:14:52.063 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -L nvme -L nvme_vfio -L vfio_pci 00:14:52.063 [2024-07-25 12:00:39.198281] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:14:52.063 [2024-07-25 12:00:39.198306] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid305290 ] 00:14:52.063 EAL: No free 2048 kB hugepages reported on node 1 00:14:52.063 [2024-07-25 12:00:39.226565] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user1/1 00:14:52.063 [2024-07-25 12:00:39.236452] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.063 [2024-07-25 12:00:39.236472] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7f685628b000 00:14:52.063 [2024-07-25 12:00:39.237457] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.238460] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.239467] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.240473] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.241474] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.242485] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.243494] vfio_user_pci.c: 
304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.244499] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:14:52.063 [2024-07-25 12:00:39.245507] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:14:52.063 [2024-07-25 12:00:39.245515] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7f6856280000 00:14:52.063 [2024-07-25 12:00:39.246457] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.063 [2024-07-25 12:00:39.255074] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user1/1/cntrl Setup Successfully 00:14:52.063 [2024-07-25 12:00:39.255105] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to connect adminq (no timeout) 00:14:52.063 [2024-07-25 12:00:39.260602] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.063 [2024-07-25 12:00:39.260640] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:14:52.063 [2024-07-25 12:00:39.260719] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for connect adminq (no timeout) 00:14:52.063 [2024-07-25 12:00:39.260736] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs (no timeout) 00:14:52.063 [2024-07-25 12:00:39.260741] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read vs wait for vs (no timeout) 00:14:52.063 [2024-07-25 12:00:39.261594] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x8, value 0x10300 00:14:52.063 [2024-07-25 12:00:39.261606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap (no timeout) 00:14:52.063 [2024-07-25 12:00:39.261612] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to read cap wait for cap (no timeout) 00:14:52.063 [2024-07-25 12:00:39.262598] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x0, value 0x201e0100ff 00:14:52.063 [2024-07-25 12:00:39.262606] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en (no timeout) 00:14:52.063 [2024-07-25 12:00:39.262613] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to check en wait for cc (timeout 15000 ms) 00:14:52.063 [2024-07-25 12:00:39.263603] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x0 00:14:52.064 [2024-07-25 12:00:39.263611] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:14:52.064 [2024-07-25 12:00:39.264608] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr 
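Each controller is then exercised with SPDK's own userspace initiator rather than the kernel driver: spdk_nvme_identify is pointed at the per-controller socket directory through a transport ID string, and the vfio_user_pci/nvme_vfio debug lines above are that tool mapping the emulated BARs and bringing the controller up (CC.EN set to 1, then CSTS.RDY reported as 1). The invocation, exactly as traced; the -L options only turn on the extra debug logging seen here.

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

  "$SPDK/build/bin/spdk_nvme_identify" \
      -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' \
      -g -L nvme -L nvme_vfio -L vfio_pci

The EAL parameter line in the trace shows this second SPDK process running with --no-pci and its own --file-prefix, which keeps its DPDK state separate from the target's so the two can coexist on the same machine.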
/var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x0 00:14:52.064 [2024-07-25 12:00:39.264616] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 0 && CSTS.RDY = 0 00:14:52.064 [2024-07-25 12:00:39.264621] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to controller is disabled (timeout 15000 ms) 00:14:52.064 [2024-07-25 12:00:39.264626] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:14:52.064 [2024-07-25 12:00:39.264731] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Setting CC.EN = 1 00:14:52.064 [2024-07-25 12:00:39.264736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:14:52.064 [2024-07-25 12:00:39.264743] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x28, value 0x2000003c0000 00:14:52.064 [2024-07-25 12:00:39.265617] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x30, value 0x2000003be000 00:14:52.064 [2024-07-25 12:00:39.266622] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x24, value 0xff00ff 00:14:52.064 [2024-07-25 12:00:39.267631] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.064 [2024-07-25 12:00:39.268630] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:52.064 [2024-07-25 12:00:39.268707] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:14:52.064 [2024-07-25 12:00:39.269644] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x1 00:14:52.064 [2024-07-25 12:00:39.269651] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:14:52.064 [2024-07-25 12:00:39.269655] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to reset admin queue (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269672] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller (no timeout) 00:14:52.064 [2024-07-25 12:00:39.269679] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify controller (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269694] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.064 [2024-07-25 12:00:39.269698] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.064 [2024-07-25 12:00:39.269702] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.064 [2024-07-25 12:00:39.269715] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 
cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.269764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.269773] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_xfer_size 131072 00:14:52.064 [2024-07-25 12:00:39.269778] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] MDTS max_xfer_size 131072 00:14:52.064 [2024-07-25 12:00:39.269781] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] CNTLID 0x0001 00:14:52.064 [2024-07-25 12:00:39.269786] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:14:52.064 [2024-07-25 12:00:39.269790] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] transport max_sges 1 00:14:52.064 [2024-07-25 12:00:39.269794] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] fuses compare and write: 1 00:14:52.064 [2024-07-25 12:00:39.269798] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to configure AER (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269805] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for configure aer (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269818] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.269831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.269843] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.064 [2024-07-25 12:00:39.269851] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.064 [2024-07-25 12:00:39.269858] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.064 [2024-07-25 12:00:39.269865] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:14:52.064 [2024-07-25 12:00:39.269869] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set keep alive timeout (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269877] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269885] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.269893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.269897] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Controller adjusted keep alive timeout to 0 ms 00:14:52.064 
[2024-07-25 12:00:39.269902] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify controller iocs specific (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269910] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set number of queues (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269915] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for set number of queues (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269923] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.269932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.269983] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify active ns (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269990] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify active ns (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.269997] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:14:52.064 [2024-07-25 12:00:39.270001] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:14:52.064 [2024-07-25 12:00:39.270004] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.064 [2024-07-25 12:00:39.270009] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.270020] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.270029] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Namespace 1 was added 00:14:52.064 [2024-07-25 12:00:39.270041] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.270052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify ns (timeout 30000 ms) 00:14:52.064 [2024-07-25 12:00:39.270060] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.064 [2024-07-25 12:00:39.270064] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.064 [2024-07-25 12:00:39.270067] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.064 [2024-07-25 12:00:39.270072] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.064 [2024-07-25 12:00:39.270089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:14:52.064 [2024-07-25 12:00:39.270102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify namespace id descriptors (timeout 
30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270109] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270115] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:14:52.065 [2024-07-25 12:00:39.270118] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.065 [2024-07-25 12:00:39.270121] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.065 [2024-07-25 12:00:39.270127] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270148] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to identify ns iocs specific (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270154] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported log pages (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set supported features (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270169] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host behavior support feature (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270173] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set doorbell buffer config (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270178] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to set host ID (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270182] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] NVMe-oF transport - not sending Set Features - Host ID 00:14:52.065 [2024-07-25 12:00:39.270186] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to transport ready (timeout 30000 ms) 00:14:52.065 [2024-07-25 12:00:39.270190] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] setting state to ready (no timeout) 00:14:52.065 [2024-07-25 12:00:39.270208] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270229] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270248] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:14:52.065 [2024-07-25 
12:00:39.270259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270268] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270291] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:14:52.065 [2024-07-25 12:00:39.270296] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:14:52.065 [2024-07-25 12:00:39.270299] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:14:52.065 [2024-07-25 12:00:39.270302] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:14:52.065 [2024-07-25 12:00:39.270305] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:14:52.065 [2024-07-25 12:00:39.270310] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:14:52.065 [2024-07-25 12:00:39.270316] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:14:52.065 [2024-07-25 12:00:39.270319] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:14:52.065 [2024-07-25 12:00:39.270322] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.065 [2024-07-25 12:00:39.270328] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270334] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:14:52.065 [2024-07-25 12:00:39.270337] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:14:52.065 [2024-07-25 12:00:39.270340] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.065 [2024-07-25 12:00:39.270345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270352] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:14:52.065 [2024-07-25 12:00:39.270356] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:14:52.065 [2024-07-25 12:00:39.270358] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:14:52.065 [2024-07-25 12:00:39.270364] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:14:52.065 [2024-07-25 12:00:39.270370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 
12:00:39.270392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:14:52.065 [2024-07-25 12:00:39.270398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:14:52.065 ===================================================== 00:14:52.065 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:52.065 ===================================================== 00:14:52.065 Controller Capabilities/Features 00:14:52.065 ================================ 00:14:52.065 Vendor ID: 4e58 00:14:52.065 Subsystem Vendor ID: 4e58 00:14:52.065 Serial Number: SPDK1 00:14:52.065 Model Number: SPDK bdev Controller 00:14:52.065 Firmware Version: 24.09 00:14:52.065 Recommended Arb Burst: 6 00:14:52.065 IEEE OUI Identifier: 8d 6b 50 00:14:52.065 Multi-path I/O 00:14:52.065 May have multiple subsystem ports: Yes 00:14:52.065 May have multiple controllers: Yes 00:14:52.065 Associated with SR-IOV VF: No 00:14:52.065 Max Data Transfer Size: 131072 00:14:52.065 Max Number of Namespaces: 32 00:14:52.065 Max Number of I/O Queues: 127 00:14:52.065 NVMe Specification Version (VS): 1.3 00:14:52.065 NVMe Specification Version (Identify): 1.3 00:14:52.065 Maximum Queue Entries: 256 00:14:52.065 Contiguous Queues Required: Yes 00:14:52.065 Arbitration Mechanisms Supported 00:14:52.065 Weighted Round Robin: Not Supported 00:14:52.065 Vendor Specific: Not Supported 00:14:52.065 Reset Timeout: 15000 ms 00:14:52.065 Doorbell Stride: 4 bytes 00:14:52.065 NVM Subsystem Reset: Not Supported 00:14:52.065 Command Sets Supported 00:14:52.065 NVM Command Set: Supported 00:14:52.065 Boot Partition: Not Supported 00:14:52.065 Memory Page Size Minimum: 4096 bytes 00:14:52.065 Memory Page Size Maximum: 4096 bytes 00:14:52.065 Persistent Memory Region: Not Supported 00:14:52.065 Optional Asynchronous Events Supported 00:14:52.065 Namespace Attribute Notices: Supported 00:14:52.065 Firmware Activation Notices: Not Supported 00:14:52.065 ANA Change Notices: Not Supported 00:14:52.065 PLE Aggregate Log Change Notices: Not Supported 00:14:52.065 LBA Status Info Alert Notices: Not Supported 00:14:52.065 EGE Aggregate Log Change Notices: Not Supported 00:14:52.065 Normal NVM Subsystem Shutdown event: Not Supported 00:14:52.065 Zone Descriptor Change Notices: Not Supported 00:14:52.065 Discovery Log Change Notices: Not Supported 00:14:52.065 Controller Attributes 00:14:52.065 128-bit Host Identifier: Supported 00:14:52.065 Non-Operational Permissive Mode: Not Supported 00:14:52.065 NVM Sets: Not Supported 00:14:52.065 Read Recovery Levels: Not Supported 00:14:52.065 Endurance Groups: Not Supported 00:14:52.065 Predictable Latency Mode: Not Supported 00:14:52.065 Traffic Based Keep ALive: Not Supported 00:14:52.065 Namespace Granularity: Not Supported 00:14:52.065 SQ Associations: Not Supported 00:14:52.065 UUID List: Not Supported 00:14:52.065 Multi-Domain Subsystem: Not Supported 00:14:52.065 Fixed Capacity Management: Not Supported 00:14:52.065 Variable Capacity Management: Not Supported 00:14:52.066 Delete Endurance Group: Not Supported 00:14:52.066 Delete NVM Set: Not Supported 00:14:52.066 Extended LBA Formats Supported: Not Supported 00:14:52.066 Flexible Data Placement Supported: Not Supported 00:14:52.066 00:14:52.066 Controller Memory Buffer Support 00:14:52.066 ================================ 00:14:52.066 Supported: No 00:14:52.066 00:14:52.066 Persistent 
Memory Region Support 00:14:52.066 ================================ 00:14:52.066 Supported: No 00:14:52.066 00:14:52.066 Admin Command Set Attributes 00:14:52.066 ============================ 00:14:52.066 Security Send/Receive: Not Supported 00:14:52.066 Format NVM: Not Supported 00:14:52.066 Firmware Activate/Download: Not Supported 00:14:52.066 Namespace Management: Not Supported 00:14:52.066 Device Self-Test: Not Supported 00:14:52.066 Directives: Not Supported 00:14:52.066 NVMe-MI: Not Supported 00:14:52.066 Virtualization Management: Not Supported 00:14:52.066 Doorbell Buffer Config: Not Supported 00:14:52.066 Get LBA Status Capability: Not Supported 00:14:52.066 Command & Feature Lockdown Capability: Not Supported 00:14:52.066 Abort Command Limit: 4 00:14:52.066 Async Event Request Limit: 4 00:14:52.066 Number of Firmware Slots: N/A 00:14:52.066 Firmware Slot 1 Read-Only: N/A 00:14:52.066 Firmware Activation Without Reset: N/A 00:14:52.066 Multiple Update Detection Support: N/A 00:14:52.066 Firmware Update Granularity: No Information Provided 00:14:52.066 Per-Namespace SMART Log: No 00:14:52.066 Asymmetric Namespace Access Log Page: Not Supported 00:14:52.066 Subsystem NQN: nqn.2019-07.io.spdk:cnode1 00:14:52.066 Command Effects Log Page: Supported 00:14:52.066 Get Log Page Extended Data: Supported 00:14:52.066 Telemetry Log Pages: Not Supported 00:14:52.066 Persistent Event Log Pages: Not Supported 00:14:52.066 Supported Log Pages Log Page: May Support 00:14:52.066 Commands Supported & Effects Log Page: Not Supported 00:14:52.066 Feature Identifiers & Effects Log Page:May Support 00:14:52.066 NVMe-MI Commands & Effects Log Page: May Support 00:14:52.066 Data Area 4 for Telemetry Log: Not Supported 00:14:52.066 Error Log Page Entries Supported: 128 00:14:52.066 Keep Alive: Supported 00:14:52.066 Keep Alive Granularity: 10000 ms 00:14:52.066 00:14:52.066 NVM Command Set Attributes 00:14:52.066 ========================== 00:14:52.066 Submission Queue Entry Size 00:14:52.066 Max: 64 00:14:52.066 Min: 64 00:14:52.066 Completion Queue Entry Size 00:14:52.066 Max: 16 00:14:52.066 Min: 16 00:14:52.066 Number of Namespaces: 32 00:14:52.066 Compare Command: Supported 00:14:52.066 Write Uncorrectable Command: Not Supported 00:14:52.066 Dataset Management Command: Supported 00:14:52.066 Write Zeroes Command: Supported 00:14:52.066 Set Features Save Field: Not Supported 00:14:52.066 Reservations: Not Supported 00:14:52.066 Timestamp: Not Supported 00:14:52.066 Copy: Supported 00:14:52.066 Volatile Write Cache: Present 00:14:52.066 Atomic Write Unit (Normal): 1 00:14:52.066 Atomic Write Unit (PFail): 1 00:14:52.066 Atomic Compare & Write Unit: 1 00:14:52.066 Fused Compare & Write: Supported 00:14:52.066 Scatter-Gather List 00:14:52.066 SGL Command Set: Supported (Dword aligned) 00:14:52.066 SGL Keyed: Not Supported 00:14:52.066 SGL Bit Bucket Descriptor: Not Supported 00:14:52.066 SGL Metadata Pointer: Not Supported 00:14:52.066 Oversized SGL: Not Supported 00:14:52.066 SGL Metadata Address: Not Supported 00:14:52.066 SGL Offset: Not Supported 00:14:52.066 Transport SGL Data Block: Not Supported 00:14:52.066 Replay Protected Memory Block: Not Supported 00:14:52.066 00:14:52.066 Firmware Slot Information 00:14:52.066 ========================= 00:14:52.066 Active slot: 1 00:14:52.066 Slot 1 Firmware Revision: 24.09 00:14:52.066 00:14:52.066 00:14:52.066 Commands Supported and Effects 00:14:52.066 ============================== 00:14:52.066 Admin Commands 00:14:52.066 -------------- 00:14:52.066 Get 
Log Page (02h): Supported 00:14:52.066 Identify (06h): Supported 00:14:52.066 Abort (08h): Supported 00:14:52.066 Set Features (09h): Supported 00:14:52.066 Get Features (0Ah): Supported 00:14:52.066 Asynchronous Event Request (0Ch): Supported 00:14:52.066 Keep Alive (18h): Supported 00:14:52.066 I/O Commands 00:14:52.066 ------------ 00:14:52.066 Flush (00h): Supported LBA-Change 00:14:52.066 Write (01h): Supported LBA-Change 00:14:52.066 Read (02h): Supported 00:14:52.066 Compare (05h): Supported 00:14:52.066 Write Zeroes (08h): Supported LBA-Change 00:14:52.066 Dataset Management (09h): Supported LBA-Change 00:14:52.066 Copy (19h): Supported LBA-Change 00:14:52.066 00:14:52.066 Error Log 00:14:52.066 ========= 00:14:52.066 00:14:52.066 Arbitration 00:14:52.066 =========== 00:14:52.066 Arbitration Burst: 1 00:14:52.066 00:14:52.066 Power Management 00:14:52.066 ================ 00:14:52.066 Number of Power States: 1 00:14:52.066 Current Power State: Power State #0 00:14:52.066 Power State #0: 00:14:52.066 Max Power: 0.00 W 00:14:52.066 Non-Operational State: Operational 00:14:52.066 Entry Latency: Not Reported 00:14:52.066 Exit Latency: Not Reported 00:14:52.066 Relative Read Throughput: 0 00:14:52.066 Relative Read Latency: 0 00:14:52.066 Relative Write Throughput: 0 00:14:52.066 Relative Write Latency: 0 00:14:52.066 Idle Power: Not Reported 00:14:52.066 Active Power: Not Reported 00:14:52.066 Non-Operational Permissive Mode: Not Supported 00:14:52.066 00:14:52.066 Health Information 00:14:52.066 ================== 00:14:52.066 Critical Warnings: 00:14:52.066 Available Spare Space: OK 00:14:52.066 Temperature: OK 00:14:52.066 Device Reliability: OK 00:14:52.066 Read Only: No 00:14:52.066 Volatile Memory Backup: OK 00:14:52.066 Current Temperature: 0 Kelvin (-273 Celsius) 00:14:52.066 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:14:52.066 Available Spare: 0% 00:14:52.066 Available Sp[2024-07-25 12:00:39.270485] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:14:52.066 [2024-07-25 12:00:39.270496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:14:52.066 [2024-07-25 12:00:39.270519] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] Prepare to destruct SSD 00:14:52.066 [2024-07-25 12:00:39.270528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.066 [2024-07-25 12:00:39.270535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.066 [2024-07-25 12:00:39.270541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.066 [2024-07-25 12:00:39.270546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:14:52.066 [2024-07-25 12:00:39.274050] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x460001 00:14:52.066 [2024-07-25 12:00:39.274061] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x14, value 0x464001 00:14:52.066 [2024-07-25 12:00:39.274673] vfio_user.c:2798:disable_ctrlr: *NOTICE*: 
/var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:52.066 [2024-07-25 12:00:39.274719] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] RTD3E = 0 us 00:14:52.066 [2024-07-25 12:00:39.274725] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown timeout = 10000 ms 00:14:52.066 [2024-07-25 12:00:39.275681] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user1/1: offset 0x1c, value 0x9 00:14:52.066 [2024-07-25 12:00:39.275692] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user1/1] shutdown complete in 0 milliseconds 00:14:52.066 [2024-07-25 12:00:39.275741] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user1/1/cntrl 00:14:52.066 [2024-07-25 12:00:39.277727] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:14:52.326 are Threshold: 0% 00:14:52.326 Life Percentage Used: 0% 00:14:52.326 Data Units Read: 0 00:14:52.326 Data Units Written: 0 00:14:52.326 Host Read Commands: 0 00:14:52.326 Host Write Commands: 0 00:14:52.326 Controller Busy Time: 0 minutes 00:14:52.326 Power Cycles: 0 00:14:52.326 Power On Hours: 0 hours 00:14:52.326 Unsafe Shutdowns: 0 00:14:52.326 Unrecoverable Media Errors: 0 00:14:52.326 Lifetime Error Log Entries: 0 00:14:52.326 Warning Temperature Time: 0 minutes 00:14:52.326 Critical Temperature Time: 0 minutes 00:14:52.326 00:14:52.326 Number of Queues 00:14:52.326 ================ 00:14:52.326 Number of I/O Submission Queues: 127 00:14:52.326 Number of I/O Completion Queues: 127 00:14:52.326 00:14:52.326 Active Namespaces 00:14:52.326 ================= 00:14:52.326 Namespace ID:1 00:14:52.326 Error Recovery Timeout: Unlimited 00:14:52.326 Command Set Identifier: NVM (00h) 00:14:52.326 Deallocate: Supported 00:14:52.326 Deallocated/Unwritten Error: Not Supported 00:14:52.326 Deallocated Read Value: Unknown 00:14:52.326 Deallocate in Write Zeroes: Not Supported 00:14:52.326 Deallocated Guard Field: 0xFFFF 00:14:52.326 Flush: Supported 00:14:52.326 Reservation: Supported 00:14:52.326 Namespace Sharing Capabilities: Multiple Controllers 00:14:52.326 Size (in LBAs): 131072 (0GiB) 00:14:52.326 Capacity (in LBAs): 131072 (0GiB) 00:14:52.326 Utilization (in LBAs): 131072 (0GiB) 00:14:52.326 NGUID: C376157790A241E7B4F262802BD73461 00:14:52.326 UUID: c3761577-90a2-41e7-b4f2-62802bd73461 00:14:52.326 Thin Provisioning: Not Supported 00:14:52.326 Per-NS Atomic Units: Yes 00:14:52.326 Atomic Boundary Size (Normal): 0 00:14:52.326 Atomic Boundary Size (PFail): 0 00:14:52.326 Atomic Boundary Offset: 0 00:14:52.326 Maximum Single Source Range Length: 65535 00:14:52.326 Maximum Copy Length: 65535 00:14:52.326 Maximum Source Range Count: 1 00:14:52.326 NGUID/EUI64 Never Reused: No 00:14:52.326 Namespace Write Protected: No 00:14:52.326 Number of LBA Formats: 1 00:14:52.326 Current LBA Format: LBA Format #00 00:14:52.326 LBA Format #00: Data Size: 512 Metadata Size: 0 00:14:52.326 00:14:52.326 12:00:39 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:14:52.326 EAL: No free 2048 kB hugepages reported 
on node 1 00:14:52.326 [2024-07-25 12:00:39.491810] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:14:57.588 Initializing NVMe Controllers 00:14:57.588 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:14:57.588 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:14:57.588 Initialization complete. Launching workers. 00:14:57.588 ======================================================== 00:14:57.588 Latency(us) 00:14:57.588 Device Information : IOPS MiB/s Average min max 00:14:57.589 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 39916.89 155.93 3206.25 972.03 6667.91 00:14:57.589 ======================================================== 00:14:57.589 Total : 39916.89 155.93 3206.25 972.03 6667.91 00:14:57.589 00:14:57.589 [2024-07-25 12:00:44.510177] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:14:57.589 12:00:44 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:14:57.589 EAL: No free 2048 kB hugepages reported on node 1 00:14:57.589 [2024-07-25 12:00:44.736220] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:02.887 Initializing NVMe Controllers 00:15:02.887 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:02.887 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 with lcore 1 00:15:02.887 Initialization complete. Launching workers. 
00:15:02.887 ======================================================== 00:15:02.887 Latency(us) 00:15:02.887 Device Information : IOPS MiB/s Average min max 00:15:02.887 VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) NSID 1 from core 1: 16021.44 62.58 7988.61 5988.33 15493.17 00:15:02.887 ======================================================== 00:15:02.887 Total : 16021.44 62.58 7988.61 5988.33 15493.17 00:15:02.887 00:15:02.887 [2024-07-25 12:00:49.773074] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:02.887 12:00:49 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:02.887 EAL: No free 2048 kB hugepages reported on node 1 00:15:02.887 [2024-07-25 12:00:49.962046] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:08.163 [2024-07-25 12:00:55.064571] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:08.163 Initializing NVMe Controllers 00:15:08.163 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.163 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user1/1:: nqn.2019-07.io.spdk:cnode1 00:15:08.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 1 00:15:08.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 2 00:15:08.163 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user1/1) with lcore 3 00:15:08.163 Initialization complete. Launching workers. 00:15:08.163 Starting thread on core 2 00:15:08.163 Starting thread on core 3 00:15:08.163 Starting thread on core 1 00:15:08.163 12:00:55 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -d 256 -g 00:15:08.163 EAL: No free 2048 kB hugepages reported on node 1 00:15:08.163 [2024-07-25 12:00:55.344420] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.452 [2024-07-25 12:00:58.415280] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.452 Initializing NVMe Controllers 00:15:11.452 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.452 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.452 Associating SPDK bdev Controller (SPDK1 ) with lcore 0 00:15:11.452 Associating SPDK bdev Controller (SPDK1 ) with lcore 1 00:15:11.452 Associating SPDK bdev Controller (SPDK1 ) with lcore 2 00:15:11.452 Associating SPDK bdev Controller (SPDK1 ) with lcore 3 00:15:11.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:11.452 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:11.452 Initialization complete. Launching workers. 
00:15:11.452 Starting thread on core 1 with urgent priority queue 00:15:11.452 Starting thread on core 2 with urgent priority queue 00:15:11.452 Starting thread on core 3 with urgent priority queue 00:15:11.452 Starting thread on core 0 with urgent priority queue 00:15:11.452 SPDK bdev Controller (SPDK1 ) core 0: 8379.67 IO/s 11.93 secs/100000 ios 00:15:11.452 SPDK bdev Controller (SPDK1 ) core 1: 7589.67 IO/s 13.18 secs/100000 ios 00:15:11.452 SPDK bdev Controller (SPDK1 ) core 2: 9436.33 IO/s 10.60 secs/100000 ios 00:15:11.452 SPDK bdev Controller (SPDK1 ) core 3: 8438.67 IO/s 11.85 secs/100000 ios 00:15:11.452 ======================================================== 00:15:11.452 00:15:11.452 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.452 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.452 [2024-07-25 12:00:58.678485] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:11.712 Initializing NVMe Controllers 00:15:11.712 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.712 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:11.712 Namespace ID: 1 size: 0GB 00:15:11.712 Initialization complete. 00:15:11.712 INFO: using host memory buffer for IO 00:15:11.712 Hello world! 00:15:11.712 [2024-07-25 12:00:58.711697] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:11.712 12:00:58 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' 00:15:11.712 EAL: No free 2048 kB hugepages reported on node 1 00:15:11.971 [2024-07-25 12:00:58.981034] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:12.918 Initializing NVMe Controllers 00:15:12.918 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.918 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:12.918 Initialization complete. Launching workers. 
00:15:12.918 submit (in ns) avg, min, max = 7418.7, 3236.5, 4000835.7 00:15:12.918 complete (in ns) avg, min, max = 21575.7, 1765.2, 4994679.1 00:15:12.918 00:15:12.918 Submit histogram 00:15:12.918 ================ 00:15:12.918 Range in us Cumulative Count 00:15:12.918 3.228 - 3.242: 0.0123% ( 2) 00:15:12.918 3.242 - 3.256: 0.0432% ( 5) 00:15:12.918 3.256 - 3.270: 0.0740% ( 5) 00:15:12.918 3.270 - 3.283: 0.1233% ( 8) 00:15:12.918 3.283 - 3.297: 0.2158% ( 15) 00:15:12.918 3.297 - 3.311: 0.8572% ( 104) 00:15:12.918 3.311 - 3.325: 3.6386% ( 451) 00:15:12.918 3.325 - 3.339: 8.4120% ( 774) 00:15:12.918 3.339 - 3.353: 13.8020% ( 874) 00:15:12.918 3.353 - 3.367: 19.9013% ( 989) 00:15:12.918 3.367 - 3.381: 26.0376% ( 995) 00:15:12.918 3.381 - 3.395: 31.4585% ( 879) 00:15:12.918 3.395 - 3.409: 37.0028% ( 899) 00:15:12.918 3.409 - 3.423: 42.5470% ( 899) 00:15:12.918 3.423 - 3.437: 47.4252% ( 791) 00:15:12.918 3.437 - 3.450: 51.3352% ( 634) 00:15:12.918 3.450 - 3.464: 56.4909% ( 836) 00:15:12.918 3.464 - 3.478: 63.1021% ( 1072) 00:15:12.918 3.478 - 3.492: 68.0604% ( 804) 00:15:12.918 3.492 - 3.506: 72.3466% ( 695) 00:15:12.918 3.506 - 3.520: 77.7798% ( 881) 00:15:12.918 3.520 - 3.534: 81.3938% ( 586) 00:15:12.918 3.534 - 3.548: 83.8976% ( 406) 00:15:12.918 3.548 - 3.562: 85.3901% ( 242) 00:15:12.918 3.562 - 3.590: 86.8949% ( 244) 00:15:12.918 3.590 - 3.617: 87.8323% ( 152) 00:15:12.918 3.617 - 3.645: 89.3802% ( 251) 00:15:12.918 3.645 - 3.673: 91.1193% ( 282) 00:15:12.918 3.673 - 3.701: 92.6735% ( 252) 00:15:12.918 3.701 - 3.729: 94.5359% ( 302) 00:15:12.918 3.729 - 3.757: 96.2627% ( 280) 00:15:12.918 3.757 - 3.784: 97.6072% ( 218) 00:15:12.918 3.784 - 3.812: 98.4952% ( 144) 00:15:12.918 3.812 - 3.840: 99.0009% ( 82) 00:15:12.918 3.840 - 3.868: 99.3278% ( 53) 00:15:12.918 3.868 - 3.896: 99.5066% ( 29) 00:15:12.918 3.896 - 3.923: 99.5930% ( 14) 00:15:12.918 3.923 - 3.951: 99.5991% ( 1) 00:15:12.918 3.951 - 3.979: 99.6176% ( 3) 00:15:12.918 4.063 - 4.090: 99.6238% ( 1) 00:15:12.918 5.231 - 5.259: 99.6300% ( 1) 00:15:12.918 5.510 - 5.537: 99.6361% ( 1) 00:15:12.918 5.871 - 5.899: 99.6423% ( 1) 00:15:12.918 6.038 - 6.066: 99.6485% ( 1) 00:15:12.918 6.066 - 6.094: 99.6546% ( 1) 00:15:12.918 6.205 - 6.233: 99.6608% ( 1) 00:15:12.918 6.261 - 6.289: 99.6670% ( 1) 00:15:12.918 6.372 - 6.400: 99.6731% ( 1) 00:15:12.918 6.456 - 6.483: 99.6793% ( 1) 00:15:12.918 6.567 - 6.595: 99.6855% ( 1) 00:15:12.918 6.650 - 6.678: 99.6916% ( 1) 00:15:12.918 6.678 - 6.706: 99.6978% ( 1) 00:15:12.918 6.762 - 6.790: 99.7040% ( 1) 00:15:12.918 6.790 - 6.817: 99.7101% ( 1) 00:15:12.918 6.845 - 6.873: 99.7225% ( 2) 00:15:12.918 6.957 - 6.984: 99.7286% ( 1) 00:15:12.918 6.984 - 7.012: 99.7348% ( 1) 00:15:12.918 7.012 - 7.040: 99.7533% ( 3) 00:15:12.918 7.040 - 7.068: 99.7595% ( 1) 00:15:12.918 7.179 - 7.235: 99.7656% ( 1) 00:15:12.918 7.346 - 7.402: 99.7780% ( 2) 00:15:12.918 7.402 - 7.457: 99.7903% ( 2) 00:15:12.918 7.513 - 7.569: 99.7965% ( 1) 00:15:12.918 7.569 - 7.624: 99.8088% ( 2) 00:15:12.918 7.624 - 7.680: 99.8150% ( 1) 00:15:12.918 7.680 - 7.736: 99.8273% ( 2) 00:15:12.918 7.736 - 7.791: 99.8335% ( 1) 00:15:12.918 7.903 - 7.958: 99.8397% ( 1) 00:15:12.918 7.958 - 8.014: 99.8458% ( 1) 00:15:12.918 8.070 - 8.125: 99.8520% ( 1) 00:15:12.918 8.181 - 8.237: 99.8582% ( 1) 00:15:12.918 8.515 - 8.570: 99.8705% ( 2) 00:15:12.918 8.626 - 8.682: 99.8767% ( 1) 00:15:12.918 8.737 - 8.793: 99.8828% ( 1) 00:15:12.918 9.127 - 9.183: 99.8890% ( 1) 00:15:12.918 14.080 - 14.136: 99.8952% ( 1) 00:15:12.918 16.028 - 16.139: 99.9013% ( 1) 
00:15:12.918 3989.148 - 4017.642: 100.0000% ( 16) 00:15:12.918 00:15:12.918 Complete histogram 00:15:12.918 ================== 00:15:12.918 Ra[2024-07-25 12:01:00.001034] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:12.918 nge in us Cumulative Count 00:15:12.918 1.760 - 1.767: 0.0062% ( 1) 00:15:12.918 1.774 - 1.781: 0.0123% ( 1) 00:15:12.918 1.781 - 1.795: 0.0247% ( 2) 00:15:12.918 1.795 - 1.809: 0.0925% ( 11) 00:15:12.918 1.809 - 1.823: 2.0413% ( 316) 00:15:12.918 1.823 - 1.837: 5.0632% ( 490) 00:15:12.918 1.837 - 1.850: 6.3953% ( 216) 00:15:12.918 1.850 - 1.864: 8.9793% ( 419) 00:15:12.918 1.864 - 1.878: 46.7715% ( 6128) 00:15:12.918 1.878 - 1.892: 86.1856% ( 6391) 00:15:12.918 1.892 - 1.906: 92.4699% ( 1019) 00:15:12.918 1.906 - 1.920: 95.6275% ( 512) 00:15:12.918 1.920 - 1.934: 96.5279% ( 146) 00:15:12.918 1.934 - 1.948: 97.5393% ( 164) 00:15:12.918 1.948 - 1.962: 98.5446% ( 163) 00:15:12.918 1.962 - 1.976: 98.9886% ( 72) 00:15:12.918 1.976 - 1.990: 99.1304% ( 23) 00:15:12.918 1.990 - 2.003: 99.2291% ( 16) 00:15:12.918 2.003 - 2.017: 99.2661% ( 6) 00:15:12.918 2.017 - 2.031: 99.2846% ( 3) 00:15:12.918 2.031 - 2.045: 99.2908% ( 1) 00:15:12.918 2.045 - 2.059: 99.2969% ( 1) 00:15:12.918 2.059 - 2.073: 99.3031% ( 1) 00:15:12.918 2.087 - 2.101: 99.3093% ( 1) 00:15:12.918 2.393 - 2.407: 99.3154% ( 1) 00:15:12.918 2.421 - 2.435: 99.3216% ( 1) 00:15:12.918 2.588 - 2.602: 99.3278% ( 1) 00:15:12.918 3.812 - 3.840: 99.3340% ( 1) 00:15:12.918 4.035 - 4.063: 99.3401% ( 1) 00:15:12.918 4.146 - 4.174: 99.3463% ( 1) 00:15:12.918 4.313 - 4.341: 99.3525% ( 1) 00:15:12.918 4.369 - 4.397: 99.3586% ( 1) 00:15:12.918 4.397 - 4.424: 99.3648% ( 1) 00:15:12.918 4.814 - 4.842: 99.3710% ( 1) 00:15:12.918 4.897 - 4.925: 99.3771% ( 1) 00:15:12.918 5.120 - 5.148: 99.3833% ( 1) 00:15:12.918 5.148 - 5.176: 99.3895% ( 1) 00:15:12.918 5.231 - 5.259: 99.3956% ( 1) 00:15:12.918 5.510 - 5.537: 99.4018% ( 1) 00:15:12.918 5.537 - 5.565: 99.4080% ( 1) 00:15:12.918 5.649 - 5.677: 99.4141% ( 1) 00:15:12.918 5.732 - 5.760: 99.4203% ( 1) 00:15:12.918 6.150 - 6.177: 99.4265% ( 1) 00:15:12.918 6.317 - 6.344: 99.4326% ( 1) 00:15:12.918 6.456 - 6.483: 99.4388% ( 1) 00:15:12.919 6.650 - 6.678: 99.4450% ( 1) 00:15:12.919 7.096 - 7.123: 99.4511% ( 1) 00:15:12.919 7.179 - 7.235: 99.4573% ( 1) 00:15:12.919 7.346 - 7.402: 99.4635% ( 1) 00:15:12.919 8.237 - 8.292: 99.4758% ( 2) 00:15:12.919 8.403 - 8.459: 99.4820% ( 1) 00:15:12.919 11.965 - 12.021: 99.4881% ( 1) 00:15:12.919 12.243 - 12.299: 99.4943% ( 1) 00:15:12.919 12.967 - 13.023: 99.5005% ( 1) 00:15:12.919 36.063 - 36.285: 99.5066% ( 1) 00:15:12.919 2592.946 - 2607.193: 99.5128% ( 1) 00:15:12.919 3989.148 - 4017.642: 99.9938% ( 78) 00:15:12.919 4986.435 - 5014.929: 100.0000% ( 1) 00:15:12.919 00:15:12.919 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user1/1 nqn.2019-07.io.spdk:cnode1 1 00:15:12.919 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user1/1 00:15:12.919 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local subnqn=nqn.2019-07.io.spdk:cnode1 00:15:12.919 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc3 00:15:12.919 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.178 [ 00:15:13.178 { 00:15:13.178 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.178 "subtype": "Discovery", 00:15:13.178 "listen_addresses": [], 00:15:13.178 "allow_any_host": true, 00:15:13.178 "hosts": [] 00:15:13.178 }, 00:15:13.178 { 00:15:13.178 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.178 "subtype": "NVMe", 00:15:13.178 "listen_addresses": [ 00:15:13.178 { 00:15:13.178 "trtype": "VFIOUSER", 00:15:13.178 "adrfam": "IPv4", 00:15:13.178 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.178 "trsvcid": "0" 00:15:13.178 } 00:15:13.178 ], 00:15:13.178 "allow_any_host": true, 00:15:13.178 "hosts": [], 00:15:13.178 "serial_number": "SPDK1", 00:15:13.178 "model_number": "SPDK bdev Controller", 00:15:13.178 "max_namespaces": 32, 00:15:13.178 "min_cntlid": 1, 00:15:13.178 "max_cntlid": 65519, 00:15:13.178 "namespaces": [ 00:15:13.178 { 00:15:13.178 "nsid": 1, 00:15:13.178 "bdev_name": "Malloc1", 00:15:13.178 "name": "Malloc1", 00:15:13.178 "nguid": "C376157790A241E7B4F262802BD73461", 00:15:13.178 "uuid": "c3761577-90a2-41e7-b4f2-62802bd73461" 00:15:13.178 } 00:15:13.178 ] 00:15:13.178 }, 00:15:13.178 { 00:15:13.178 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.178 "subtype": "NVMe", 00:15:13.178 "listen_addresses": [ 00:15:13.178 { 00:15:13.178 "trtype": "VFIOUSER", 00:15:13.178 "adrfam": "IPv4", 00:15:13.178 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.178 "trsvcid": "0" 00:15:13.178 } 00:15:13.178 ], 00:15:13.178 "allow_any_host": true, 00:15:13.178 "hosts": [], 00:15:13.178 "serial_number": "SPDK2", 00:15:13.178 "model_number": "SPDK bdev Controller", 00:15:13.178 "max_namespaces": 32, 00:15:13.178 "min_cntlid": 1, 00:15:13.178 "max_cntlid": 65519, 00:15:13.178 "namespaces": [ 00:15:13.178 { 00:15:13.178 "nsid": 1, 00:15:13.178 "bdev_name": "Malloc2", 00:15:13.178 "name": "Malloc2", 00:15:13.178 "nguid": "2F3D32D328EC49ACA7A473356675261D", 00:15:13.178 "uuid": "2f3d32d3-28ec-49ac-a7a4-73356675261d" 00:15:13.178 } 00:15:13.178 ] 00:15:13.178 } 00:15:13.178 ] 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=308743 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user1/1 subnqn:nqn.2019-07.io.spdk:cnode1' -n 2 -g -t /tmp/aer_touch_file 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:13.179 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc3 00:15:13.179 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.179 [2024-07-25 12:01:00.383554] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: enabling controller 00:15:13.179 Malloc3 00:15:13.438 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc3 -n 2 00:15:13.438 [2024-07-25 12:01:00.616396] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user1/1: disabling controller 00:15:13.438 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:13.438 Asynchronous Event Request test 00:15:13.438 Attaching to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.438 Attached to /var/run/vfio-user/domain/vfio-user1/1 00:15:13.438 Registering asynchronous event callbacks... 00:15:13.438 Starting namespace attribute notice tests for all controllers... 00:15:13.438 /var/run/vfio-user/domain/vfio-user1/1: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:13.438 aer_cb - Changed Namespace 00:15:13.438 Cleaning up... 00:15:13.698 [ 00:15:13.698 { 00:15:13.698 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:13.698 "subtype": "Discovery", 00:15:13.698 "listen_addresses": [], 00:15:13.698 "allow_any_host": true, 00:15:13.698 "hosts": [] 00:15:13.698 }, 00:15:13.698 { 00:15:13.698 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:13.698 "subtype": "NVMe", 00:15:13.698 "listen_addresses": [ 00:15:13.698 { 00:15:13.698 "trtype": "VFIOUSER", 00:15:13.698 "adrfam": "IPv4", 00:15:13.698 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:13.698 "trsvcid": "0" 00:15:13.698 } 00:15:13.698 ], 00:15:13.698 "allow_any_host": true, 00:15:13.698 "hosts": [], 00:15:13.698 "serial_number": "SPDK1", 00:15:13.698 "model_number": "SPDK bdev Controller", 00:15:13.698 "max_namespaces": 32, 00:15:13.698 "min_cntlid": 1, 00:15:13.698 "max_cntlid": 65519, 00:15:13.698 "namespaces": [ 00:15:13.698 { 00:15:13.698 "nsid": 1, 00:15:13.698 "bdev_name": "Malloc1", 00:15:13.698 "name": "Malloc1", 00:15:13.698 "nguid": "C376157790A241E7B4F262802BD73461", 00:15:13.698 "uuid": "c3761577-90a2-41e7-b4f2-62802bd73461" 00:15:13.698 }, 00:15:13.698 { 00:15:13.698 "nsid": 2, 00:15:13.698 "bdev_name": "Malloc3", 00:15:13.698 "name": "Malloc3", 00:15:13.698 "nguid": "19DB1C01B81649FF9AD36AFFBF508CD7", 00:15:13.698 "uuid": "19db1c01-b816-49ff-9ad3-6affbf508cd7" 00:15:13.698 } 00:15:13.698 ] 00:15:13.698 }, 00:15:13.698 { 00:15:13.698 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:13.698 "subtype": "NVMe", 00:15:13.698 "listen_addresses": [ 00:15:13.698 { 00:15:13.698 "trtype": "VFIOUSER", 00:15:13.698 "adrfam": "IPv4", 00:15:13.698 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:13.698 "trsvcid": "0" 00:15:13.698 } 00:15:13.698 ], 00:15:13.698 "allow_any_host": true, 00:15:13.698 "hosts": [], 00:15:13.698 
"serial_number": "SPDK2", 00:15:13.698 "model_number": "SPDK bdev Controller", 00:15:13.698 "max_namespaces": 32, 00:15:13.698 "min_cntlid": 1, 00:15:13.698 "max_cntlid": 65519, 00:15:13.698 "namespaces": [ 00:15:13.698 { 00:15:13.698 "nsid": 1, 00:15:13.698 "bdev_name": "Malloc2", 00:15:13.698 "name": "Malloc2", 00:15:13.698 "nguid": "2F3D32D328EC49ACA7A473356675261D", 00:15:13.698 "uuid": "2f3d32d3-28ec-49ac-a7a4-73356675261d" 00:15:13.698 } 00:15:13.698 ] 00:15:13.698 } 00:15:13.698 ] 00:15:13.698 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 308743 00:15:13.698 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@80 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:13.698 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@81 -- # test_traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:13.698 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@82 -- # test_subnqn=nqn.2019-07.io.spdk:cnode2 00:15:13.698 12:01:00 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@83 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -L nvme -L nvme_vfio -L vfio_pci 00:15:13.698 [2024-07-25 12:01:00.855180] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:15:13.698 [2024-07-25 12:01:00.855213] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --single-file-segments --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid308927 ] 00:15:13.698 EAL: No free 2048 kB hugepages reported on node 1 00:15:13.698 [2024-07-25 12:01:00.883467] nvme_vfio_user.c: 259:nvme_vfio_ctrlr_scan: *DEBUG*: Scan controller : /var/run/vfio-user/domain/vfio-user2/2 00:15:13.698 [2024-07-25 12:01:00.891262] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 0, Size 0x2000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:13.698 [2024-07-25 12:01:00.891283] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0x1000, Offset 0x1000, Map addr 0x7ff4766c9000 00:15:13.698 [2024-07-25 12:01:00.892264] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 1, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.698 [2024-07-25 12:01:00.893268] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 2, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.698 [2024-07-25 12:01:00.894276] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 3, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.698 [2024-07-25 12:01:00.895281] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 4, Size 0x2000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.698 [2024-07-25 12:01:00.896291] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 5, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.698 [2024-07-25 12:01:00.897300] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 6, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.699 [2024-07-25 12:01:00.898307] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 7, Size 0x1000, Offset 0x0, Flags 0x3, Cap offset 0 00:15:13.699 [2024-07-25 12:01:00.899312] 
vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 8, Size 0x0, Offset 0x0, Flags 0x0, Cap offset 0 00:15:13.699 [2024-07-25 12:01:00.900324] vfio_user_pci.c: 304:vfio_device_map_bars_and_config_region: *DEBUG*: Bar 9, Size 0xc000, Offset 0x0, Flags 0xf, Cap offset 32 00:15:13.699 [2024-07-25 12:01:00.900333] vfio_user_pci.c: 233:vfio_device_setup_sparse_mmaps: *DEBUG*: Sparse region 0, Size 0xb000, Offset 0x1000, Map addr 0x7ff4766be000 00:15:13.699 [2024-07-25 12:01:00.901274] vfio_user_pci.c: 65:vfio_add_mr: *DEBUG*: Add memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.699 [2024-07-25 12:01:00.914793] vfio_user_pci.c: 386:spdk_vfio_user_setup: *DEBUG*: Device vfio-user0, Path /var/run/vfio-user/domain/vfio-user2/2/cntrl Setup Successfully 00:15:13.699 [2024-07-25 12:01:00.914816] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to connect adminq (no timeout) 00:15:13.699 [2024-07-25 12:01:00.916875] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:13.699 [2024-07-25 12:01:00.916915] nvme_pcie_common.c: 133:nvme_pcie_qpair_construct: *INFO*: max_completions_cap = 64 num_trackers = 192 00:15:13.699 [2024-07-25 12:01:00.916990] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for connect adminq (no timeout) 00:15:13.699 [2024-07-25 12:01:00.917005] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs (no timeout) 00:15:13.699 [2024-07-25 12:01:00.917010] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read vs wait for vs (no timeout) 00:15:13.699 [2024-07-25 12:01:00.917877] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x8, value 0x10300 00:15:13.699 [2024-07-25 12:01:00.917889] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap (no timeout) 00:15:13.699 [2024-07-25 12:01:00.917896] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to read cap wait for cap (no timeout) 00:15:13.699 [2024-07-25 12:01:00.918882] nvme_vfio_user.c: 103:nvme_vfio_ctrlr_get_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x0, value 0x201e0100ff 00:15:13.699 [2024-07-25 12:01:00.918891] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en (no timeout) 00:15:13.699 [2024-07-25 12:01:00.918898] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to check en wait for cc (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.919886] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x0 00:15:13.699 [2024-07-25 12:01:00.919894] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.920892] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x0 00:15:13.699 [2024-07-25 12:01:00.920901] 
nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 0 && CSTS.RDY = 0 00:15:13.699 [2024-07-25 12:01:00.920906] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to controller is disabled (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.920911] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.921017] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Setting CC.EN = 1 00:15:13.699 [2024-07-25 12:01:00.921021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.921025] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x28, value 0x2000003c0000 00:15:13.699 [2024-07-25 12:01:00.921901] nvme_vfio_user.c: 61:nvme_vfio_ctrlr_set_reg_8: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x30, value 0x2000003be000 00:15:13.699 [2024-07-25 12:01:00.922911] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x24, value 0xff00ff 00:15:13.699 [2024-07-25 12:01:00.923919] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:13.699 [2024-07-25 12:01:00.924921] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:13.699 [2024-07-25 12:01:00.924958] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:15:13.699 [2024-07-25 12:01:00.925935] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x1 00:15:13.699 [2024-07-25 12:01:00.925944] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:15:13.699 [2024-07-25 12:01:00.925948] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to reset admin queue (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.925966] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller (no timeout) 00:15:13.699 [2024-07-25 12:01:00.925973] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify controller (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.925983] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.699 [2024-07-25 12:01:00.925988] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.699 [2024-07-25 12:01:00.925991] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.699 [2024-07-25 12:01:00.926003] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000001 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.699 [2024-07-25 12:01:00.932052] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0001 p:1 m:0 dnr:0 00:15:13.699 [2024-07-25 12:01:00.932063] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_xfer_size 131072 00:15:13.699 [2024-07-25 12:01:00.932068] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] MDTS max_xfer_size 131072 00:15:13.699 [2024-07-25 12:01:00.932072] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] CNTLID 0x0001 00:15:13.699 [2024-07-25 12:01:00.932076] nvme_ctrlr.c:2071:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Identify CNTLID 0x0001 != Connect CNTLID 0x0000 00:15:13.699 [2024-07-25 12:01:00.932081] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] transport max_sges 1 00:15:13.699 [2024-07-25 12:01:00.932085] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] fuses compare and write: 1 00:15:13.699 [2024-07-25 12:01:00.932089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to configure AER (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.932096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for configure aer (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.932108] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:191 cdw10:0000000b PRP1 0x0 PRP2 0x0 00:15:13.699 [2024-07-25 12:01:00.940047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0002 p:1 m:0 dnr:0 00:15:13.699 [2024-07-25 12:01:00.940061] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.699 [2024-07-25 12:01:00.940069] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.699 [2024-07-25 12:01:00.940077] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.699 [2024-07-25 12:01:00.940084] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:15:13.699 [2024-07-25 12:01:00.940088] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set keep alive timeout (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.940096] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:15:13.699 [2024-07-25 12:01:00.940106] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:191 cdw10:0000000f PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.948050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0007 p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.948059] nvme_ctrlr.c:3010:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Controller adjusted keep alive timeout to 0 ms 00:15:13.961 [2024-07-25 12:01:00.948064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: 
[/var/run/vfio-user/domain/vfio-user2/2] setting state to identify controller iocs specific (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.948072] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set number of queues (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.948078] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for set number of queues (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.948087] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.956048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:0008 p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.956102] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify active ns (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.956110] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify active ns (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.956117] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f9000 len:4096 00:15:13.961 [2024-07-25 12:01:00.956122] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f9000 00:15:13.961 [2024-07-25 12:01:00.956125] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.961 [2024-07-25 12:01:00.956131] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:0 cdw10:00000002 cdw11:00000000 PRP1 0x2000002f9000 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.964047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0009 p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.964058] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Namespace 1 was added 00:15:13.961 [2024-07-25 12:01:00.964070] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.964077] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify ns (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.964084] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.961 [2024-07-25 12:01:00.964087] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.961 [2024-07-25 12:01:00.964090] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.961 [2024-07-25 12:01:00.964096] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000000 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.972048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000a p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.972064] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify namespace id descriptors (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.972072] 
nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.972081] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:4096 00:15:13.961 [2024-07-25 12:01:00.972085] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.961 [2024-07-25 12:01:00.972088] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.961 [2024-07-25 12:01:00.972093] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:191 nsid:1 cdw10:00000003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.980050] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000b p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.980059] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to identify ns iocs specific (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980066] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported log pages (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980073] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set supported features (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980080] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host behavior support feature (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980085] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set doorbell buffer config (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980089] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to set host ID (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980093] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] NVMe-oF transport - not sending Set Features - Host ID 00:15:13.961 [2024-07-25 12:01:00.980098] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to transport ready (timeout 30000 ms) 00:15:13.961 [2024-07-25 12:01:00.980102] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] setting state to ready (no timeout) 00:15:13.961 [2024-07-25 12:01:00.980117] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:191 cdw10:00000001 PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.988048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000c p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.988060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:191 cdw10:00000002 PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:00.996049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000d p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:00.996072] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:191 cdw10:00000004 PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:01.004048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:000e p:1 m:0 dnr:0 00:15:13.961 [2024-07-25 12:01:01.004060] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:191 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:15:13.961 [2024-07-25 12:01:01.012047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:7e007e sqhd:000f p:1 m:0 dnr:0 00:15:13.962 [2024-07-25 12:01:01.012063] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f6000 len:8192 00:15:13.962 [2024-07-25 12:01:01.012068] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f6000 00:15:13.962 [2024-07-25 12:01:01.012071] nvme_pcie_common.c:1239:nvme_pcie_prp_list_append: *DEBUG*: prp[0] = 0x2000002f7000 00:15:13.962 [2024-07-25 12:01:01.012074] nvme_pcie_common.c:1255:nvme_pcie_prp_list_append: *DEBUG*: prp2 = 0x2000002f7000 00:15:13.962 [2024-07-25 12:01:01.012077] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 2 00:15:13.962 [2024-07-25 12:01:01.012085] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:191 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 PRP1 0x2000002f6000 PRP2 0x2000002f7000 00:15:13.962 [2024-07-25 12:01:01.012092] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fc000 len:512 00:15:13.962 [2024-07-25 12:01:01.012096] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fc000 00:15:13.962 [2024-07-25 12:01:01.012099] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.962 [2024-07-25 12:01:01.012104] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:186 nsid:ffffffff cdw10:007f0002 cdw11:00000000 PRP1 0x2000002fc000 PRP2 0x0 00:15:13.962 [2024-07-25 12:01:01.012110] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002fb000 len:512 00:15:13.962 [2024-07-25 12:01:01.012114] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002fb000 00:15:13.962 [2024-07-25 12:01:01.012117] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.962 [2024-07-25 12:01:01.012122] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:185 nsid:ffffffff cdw10:007f0003 cdw11:00000000 PRP1 0x2000002fb000 PRP2 0x0 00:15:13.962 [2024-07-25 12:01:01.012129] nvme_pcie_common.c:1202:nvme_pcie_prp_list_append: *DEBUG*: prp_index:0 virt_addr:0x2000002f4000 len:4096 00:15:13.962 [2024-07-25 12:01:01.012133] nvme_pcie_common.c:1230:nvme_pcie_prp_list_append: *DEBUG*: prp1 = 0x2000002f4000 00:15:13.962 [2024-07-25 12:01:01.012136] nvme_pcie_common.c:1290:nvme_pcie_qpair_build_contig_request: *DEBUG*: Number of PRP entries: 1 00:15:13.962 [2024-07-25 12:01:01.012141] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:184 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 PRP1 0x2000002f4000 PRP2 0x0 00:15:13.962 [2024-07-25 12:01:01.020048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:191 cdw0:0 sqhd:0010 p:1 m:0 dnr:0 00:15:13.962 [2024-07-25 12:01:01.020062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:186 cdw0:0 sqhd:0011 p:1 m:0 dnr:0 00:15:13.962 [2024-07-25 12:01:01.020072] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:185 cdw0:0 sqhd:0012 p:1 m:0 dnr:0 00:15:13.962 [2024-07-25 12:01:01.020078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0013 p:1 m:0 dnr:0 00:15:13.962 ===================================================== 00:15:13.962 NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:13.962 ===================================================== 00:15:13.962 Controller Capabilities/Features 00:15:13.962 ================================ 00:15:13.962 Vendor ID: 4e58 00:15:13.962 Subsystem Vendor ID: 4e58 00:15:13.962 Serial Number: SPDK2 00:15:13.962 Model Number: SPDK bdev Controller 00:15:13.962 Firmware Version: 24.09 00:15:13.962 Recommended Arb Burst: 6 00:15:13.962 IEEE OUI Identifier: 8d 6b 50 00:15:13.962 Multi-path I/O 00:15:13.962 May have multiple subsystem ports: Yes 00:15:13.962 May have multiple controllers: Yes 00:15:13.962 Associated with SR-IOV VF: No 00:15:13.962 Max Data Transfer Size: 131072 00:15:13.962 Max Number of Namespaces: 32 00:15:13.962 Max Number of I/O Queues: 127 00:15:13.962 NVMe Specification Version (VS): 1.3 00:15:13.962 NVMe Specification Version (Identify): 1.3 00:15:13.962 Maximum Queue Entries: 256 00:15:13.962 Contiguous Queues Required: Yes 00:15:13.962 Arbitration Mechanisms Supported 00:15:13.962 Weighted Round Robin: Not Supported 00:15:13.962 Vendor Specific: Not Supported 00:15:13.962 Reset Timeout: 15000 ms 00:15:13.962 Doorbell Stride: 4 bytes 00:15:13.962 NVM Subsystem Reset: Not Supported 00:15:13.962 Command Sets Supported 00:15:13.962 NVM Command Set: Supported 00:15:13.962 Boot Partition: Not Supported 00:15:13.962 Memory Page Size Minimum: 4096 bytes 00:15:13.962 Memory Page Size Maximum: 4096 bytes 00:15:13.962 Persistent Memory Region: Not Supported 00:15:13.962 Optional Asynchronous Events Supported 00:15:13.962 Namespace Attribute Notices: Supported 00:15:13.962 Firmware Activation Notices: Not Supported 00:15:13.962 ANA Change Notices: Not Supported 00:15:13.962 PLE Aggregate Log Change Notices: Not Supported 00:15:13.962 LBA Status Info Alert Notices: Not Supported 00:15:13.962 EGE Aggregate Log Change Notices: Not Supported 00:15:13.962 Normal NVM Subsystem Shutdown event: Not Supported 00:15:13.962 Zone Descriptor Change Notices: Not Supported 00:15:13.962 Discovery Log Change Notices: Not Supported 00:15:13.962 Controller Attributes 00:15:13.962 128-bit Host Identifier: Supported 00:15:13.962 Non-Operational Permissive Mode: Not Supported 00:15:13.962 NVM Sets: Not Supported 00:15:13.962 Read Recovery Levels: Not Supported 00:15:13.962 Endurance Groups: Not Supported 00:15:13.962 Predictable Latency Mode: Not Supported 00:15:13.962 Traffic Based Keep ALive: Not Supported 00:15:13.962 Namespace Granularity: Not Supported 00:15:13.962 SQ Associations: Not Supported 00:15:13.962 UUID List: Not Supported 00:15:13.962 Multi-Domain Subsystem: Not Supported 00:15:13.962 Fixed Capacity Management: Not Supported 00:15:13.962 Variable Capacity Management: Not Supported 00:15:13.962 Delete Endurance Group: Not Supported 00:15:13.962 Delete NVM Set: Not Supported 00:15:13.962 Extended LBA Formats Supported: Not Supported 00:15:13.962 Flexible Data Placement Supported: Not Supported 00:15:13.962 00:15:13.962 Controller Memory Buffer Support 00:15:13.962 ================================ 00:15:13.962 Supported: No 00:15:13.962 00:15:13.962 Persistent Memory Region Support 00:15:13.962 
================================ 00:15:13.962 Supported: No 00:15:13.962 00:15:13.962 Admin Command Set Attributes 00:15:13.962 ============================ 00:15:13.962 Security Send/Receive: Not Supported 00:15:13.962 Format NVM: Not Supported 00:15:13.962 Firmware Activate/Download: Not Supported 00:15:13.962 Namespace Management: Not Supported 00:15:13.962 Device Self-Test: Not Supported 00:15:13.962 Directives: Not Supported 00:15:13.962 NVMe-MI: Not Supported 00:15:13.962 Virtualization Management: Not Supported 00:15:13.962 Doorbell Buffer Config: Not Supported 00:15:13.962 Get LBA Status Capability: Not Supported 00:15:13.962 Command & Feature Lockdown Capability: Not Supported 00:15:13.962 Abort Command Limit: 4 00:15:13.962 Async Event Request Limit: 4 00:15:13.962 Number of Firmware Slots: N/A 00:15:13.962 Firmware Slot 1 Read-Only: N/A 00:15:13.962 Firmware Activation Without Reset: N/A 00:15:13.962 Multiple Update Detection Support: N/A 00:15:13.962 Firmware Update Granularity: No Information Provided 00:15:13.962 Per-Namespace SMART Log: No 00:15:13.962 Asymmetric Namespace Access Log Page: Not Supported 00:15:13.962 Subsystem NQN: nqn.2019-07.io.spdk:cnode2 00:15:13.962 Command Effects Log Page: Supported 00:15:13.962 Get Log Page Extended Data: Supported 00:15:13.962 Telemetry Log Pages: Not Supported 00:15:13.962 Persistent Event Log Pages: Not Supported 00:15:13.962 Supported Log Pages Log Page: May Support 00:15:13.962 Commands Supported & Effects Log Page: Not Supported 00:15:13.962 Feature Identifiers & Effects Log Page:May Support 00:15:13.962 NVMe-MI Commands & Effects Log Page: May Support 00:15:13.962 Data Area 4 for Telemetry Log: Not Supported 00:15:13.962 Error Log Page Entries Supported: 128 00:15:13.962 Keep Alive: Supported 00:15:13.962 Keep Alive Granularity: 10000 ms 00:15:13.962 00:15:13.962 NVM Command Set Attributes 00:15:13.962 ========================== 00:15:13.962 Submission Queue Entry Size 00:15:13.962 Max: 64 00:15:13.962 Min: 64 00:15:13.962 Completion Queue Entry Size 00:15:13.962 Max: 16 00:15:13.962 Min: 16 00:15:13.962 Number of Namespaces: 32 00:15:13.962 Compare Command: Supported 00:15:13.962 Write Uncorrectable Command: Not Supported 00:15:13.962 Dataset Management Command: Supported 00:15:13.962 Write Zeroes Command: Supported 00:15:13.962 Set Features Save Field: Not Supported 00:15:13.962 Reservations: Not Supported 00:15:13.962 Timestamp: Not Supported 00:15:13.962 Copy: Supported 00:15:13.962 Volatile Write Cache: Present 00:15:13.962 Atomic Write Unit (Normal): 1 00:15:13.962 Atomic Write Unit (PFail): 1 00:15:13.962 Atomic Compare & Write Unit: 1 00:15:13.962 Fused Compare & Write: Supported 00:15:13.962 Scatter-Gather List 00:15:13.962 SGL Command Set: Supported (Dword aligned) 00:15:13.962 SGL Keyed: Not Supported 00:15:13.962 SGL Bit Bucket Descriptor: Not Supported 00:15:13.962 SGL Metadata Pointer: Not Supported 00:15:13.962 Oversized SGL: Not Supported 00:15:13.963 SGL Metadata Address: Not Supported 00:15:13.963 SGL Offset: Not Supported 00:15:13.963 Transport SGL Data Block: Not Supported 00:15:13.963 Replay Protected Memory Block: Not Supported 00:15:13.963 00:15:13.963 Firmware Slot Information 00:15:13.963 ========================= 00:15:13.963 Active slot: 1 00:15:13.963 Slot 1 Firmware Revision: 24.09 00:15:13.963 00:15:13.963 00:15:13.963 Commands Supported and Effects 00:15:13.963 ============================== 00:15:13.963 Admin Commands 00:15:13.963 -------------- 00:15:13.963 Get Log Page (02h): Supported 
00:15:13.963 Identify (06h): Supported 00:15:13.963 Abort (08h): Supported 00:15:13.963 Set Features (09h): Supported 00:15:13.963 Get Features (0Ah): Supported 00:15:13.963 Asynchronous Event Request (0Ch): Supported 00:15:13.963 Keep Alive (18h): Supported 00:15:13.963 I/O Commands 00:15:13.963 ------------ 00:15:13.963 Flush (00h): Supported LBA-Change 00:15:13.963 Write (01h): Supported LBA-Change 00:15:13.963 Read (02h): Supported 00:15:13.963 Compare (05h): Supported 00:15:13.963 Write Zeroes (08h): Supported LBA-Change 00:15:13.963 Dataset Management (09h): Supported LBA-Change 00:15:13.963 Copy (19h): Supported LBA-Change 00:15:13.963 00:15:13.963 Error Log 00:15:13.963 ========= 00:15:13.963 00:15:13.963 Arbitration 00:15:13.963 =========== 00:15:13.963 Arbitration Burst: 1 00:15:13.963 00:15:13.963 Power Management 00:15:13.963 ================ 00:15:13.963 Number of Power States: 1 00:15:13.963 Current Power State: Power State #0 00:15:13.963 Power State #0: 00:15:13.963 Max Power: 0.00 W 00:15:13.963 Non-Operational State: Operational 00:15:13.963 Entry Latency: Not Reported 00:15:13.963 Exit Latency: Not Reported 00:15:13.963 Relative Read Throughput: 0 00:15:13.963 Relative Read Latency: 0 00:15:13.963 Relative Write Throughput: 0 00:15:13.963 Relative Write Latency: 0 00:15:13.963 Idle Power: Not Reported 00:15:13.963 Active Power: Not Reported 00:15:13.963 Non-Operational Permissive Mode: Not Supported 00:15:13.963 00:15:13.963 Health Information 00:15:13.963 ================== 00:15:13.963 Critical Warnings: 00:15:13.963 Available Spare Space: OK 00:15:13.963 Temperature: OK 00:15:13.963 Device Reliability: OK 00:15:13.963 Read Only: No 00:15:13.963 Volatile Memory Backup: OK 00:15:13.963 Current Temperature: 0 Kelvin (-273 Celsius) 00:15:13.963 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:15:13.963 Available Spare: 0% 00:15:13.963 Available Sp[2024-07-25 12:01:01.020164] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:184 cdw10:00000005 PRP1 0x0 PRP2 0x0 00:15:13.963 [2024-07-25 12:01:01.028049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: SUCCESS (00/00) qid:0 cid:184 cdw0:0 sqhd:0014 p:1 m:0 dnr:0 00:15:13.963 [2024-07-25 12:01:01.028077] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] Prepare to destruct SSD 00:15:13.963 [2024-07-25 12:01:01.028085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.963 [2024-07-25 12:01:01.028091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.963 [2024-07-25 12:01:01.028096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.963 [2024-07-25 12:01:01.028101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:15:13.963 [2024-07-25 12:01:01.028145] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x460001 00:15:13.963 [2024-07-25 12:01:01.028155] nvme_vfio_user.c: 49:nvme_vfio_ctrlr_set_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x14, value 0x464001 00:15:13.963 [2024-07-25 12:01:01.029153] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling 
controller 00:15:13.963 [2024-07-25 12:01:01.029196] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] RTD3E = 0 us 00:15:13.963 [2024-07-25 12:01:01.029205] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown timeout = 10000 ms 00:15:13.963 [2024-07-25 12:01:01.030152] nvme_vfio_user.c: 83:nvme_vfio_ctrlr_get_reg_4: *DEBUG*: ctrlr /var/run/vfio-user/domain/vfio-user2/2: offset 0x1c, value 0x9 00:15:13.963 [2024-07-25 12:01:01.030163] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [/var/run/vfio-user/domain/vfio-user2/2] shutdown complete in 0 milliseconds 00:15:13.963 [2024-07-25 12:01:01.030209] vfio_user_pci.c: 399:spdk_vfio_user_release: *DEBUG*: Release file /var/run/vfio-user/domain/vfio-user2/2/cntrl 00:15:13.963 [2024-07-25 12:01:01.033049] vfio_user_pci.c: 96:vfio_remove_mr: *DEBUG*: Remove memory region: FD 10, VADDR 0x200000200000, IOVA 0x200000200000, Size 0x200000 00:15:13.963 are Threshold: 0% 00:15:13.963 Life Percentage Used: 0% 00:15:13.963 Data Units Read: 0 00:15:13.963 Data Units Written: 0 00:15:13.963 Host Read Commands: 0 00:15:13.963 Host Write Commands: 0 00:15:13.963 Controller Busy Time: 0 minutes 00:15:13.963 Power Cycles: 0 00:15:13.963 Power On Hours: 0 hours 00:15:13.963 Unsafe Shutdowns: 0 00:15:13.963 Unrecoverable Media Errors: 0 00:15:13.963 Lifetime Error Log Entries: 0 00:15:13.963 Warning Temperature Time: 0 minutes 00:15:13.963 Critical Temperature Time: 0 minutes 00:15:13.963 00:15:13.963 Number of Queues 00:15:13.963 ================ 00:15:13.963 Number of I/O Submission Queues: 127 00:15:13.963 Number of I/O Completion Queues: 127 00:15:13.963 00:15:13.963 Active Namespaces 00:15:13.963 ================= 00:15:13.963 Namespace ID:1 00:15:13.963 Error Recovery Timeout: Unlimited 00:15:13.963 Command Set Identifier: NVM (00h) 00:15:13.963 Deallocate: Supported 00:15:13.963 Deallocated/Unwritten Error: Not Supported 00:15:13.963 Deallocated Read Value: Unknown 00:15:13.963 Deallocate in Write Zeroes: Not Supported 00:15:13.963 Deallocated Guard Field: 0xFFFF 00:15:13.963 Flush: Supported 00:15:13.963 Reservation: Supported 00:15:13.963 Namespace Sharing Capabilities: Multiple Controllers 00:15:13.963 Size (in LBAs): 131072 (0GiB) 00:15:13.963 Capacity (in LBAs): 131072 (0GiB) 00:15:13.963 Utilization (in LBAs): 131072 (0GiB) 00:15:13.963 NGUID: 2F3D32D328EC49ACA7A473356675261D 00:15:13.963 UUID: 2f3d32d3-28ec-49ac-a7a4-73356675261d 00:15:13.963 Thin Provisioning: Not Supported 00:15:13.963 Per-NS Atomic Units: Yes 00:15:13.963 Atomic Boundary Size (Normal): 0 00:15:13.963 Atomic Boundary Size (PFail): 0 00:15:13.963 Atomic Boundary Offset: 0 00:15:13.963 Maximum Single Source Range Length: 65535 00:15:13.963 Maximum Copy Length: 65535 00:15:13.963 Maximum Source Range Count: 1 00:15:13.963 NGUID/EUI64 Never Reused: No 00:15:13.963 Namespace Write Protected: No 00:15:13.963 Number of LBA Formats: 1 00:15:13.963 Current LBA Format: LBA Format #00 00:15:13.963 LBA Format #00: Data Size: 512 Metadata Size: 0 00:15:13.963 00:15:13.963 12:01:01 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w read -t 5 -c 0x2 00:15:13.963 EAL: No free 2048 kB hugepages reported on node 1 00:15:14.224 [2024-07-25 
12:01:01.245417] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:19.535 Initializing NVMe Controllers 00:15:19.535 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:19.535 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:19.535 Initialization complete. Launching workers. 00:15:19.535 ======================================================== 00:15:19.535 Latency(us) 00:15:19.535 Device Information : IOPS MiB/s Average min max 00:15:19.535 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39942.16 156.02 3205.06 980.28 9596.78 00:15:19.535 ======================================================== 00:15:19.535 Total : 39942.16 156.02 3205.06 980.28 9596.78 00:15:19.535 00:15:19.535 [2024-07-25 12:01:06.346293] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:19.535 12:01:06 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@85 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -s 256 -g -q 128 -o 4096 -w write -t 5 -c 0x2 00:15:19.535 EAL: No free 2048 kB hugepages reported on node 1 00:15:19.535 [2024-07-25 12:01:06.560912] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:24.813 Initializing NVMe Controllers 00:15:24.813 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:24.813 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 with lcore 1 00:15:24.813 Initialization complete. Launching workers. 
00:15:24.813 ======================================================== 00:15:24.813 Latency(us) 00:15:24.813 Device Information : IOPS MiB/s Average min max 00:15:24.813 VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) NSID 1 from core 1: 39895.60 155.84 3210.55 999.53 10074.66 00:15:24.813 ======================================================== 00:15:24.813 Total : 39895.60 155.84 3210.55 999.53 10074.66 00:15:24.813 00:15:24.813 [2024-07-25 12:01:11.580922] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:24.813 12:01:11 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@86 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -g -q 32 -o 4096 -w randrw -M 50 -t 5 -c 0xE 00:15:24.813 EAL: No free 2048 kB hugepages reported on node 1 00:15:24.813 [2024-07-25 12:01:11.776593] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:30.124 [2024-07-25 12:01:16.913137] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:30.124 Initializing NVMe Controllers 00:15:30.124 Attaching to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.124 Attached to NVMe over Fabrics controller at /var/run/vfio-user/domain/vfio-user2/2:: nqn.2019-07.io.spdk:cnode2 00:15:30.124 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 1 00:15:30.124 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 2 00:15:30.124 Associating VFIOUSER (/var/run/vfio-user/domain/vfio-user2/2) with lcore 3 00:15:30.124 Initialization complete. Launching workers. 00:15:30.124 Starting thread on core 2 00:15:30.124 Starting thread on core 3 00:15:30.124 Starting thread on core 1 00:15:30.124 12:01:16 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -t 3 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -d 256 -g 00:15:30.124 EAL: No free 2048 kB hugepages reported on node 1 00:15:30.124 [2024-07-25 12:01:17.186194] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.418 [2024-07-25 12:01:20.267486] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.418 Initializing NVMe Controllers 00:15:33.418 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.418 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.418 Associating SPDK bdev Controller (SPDK2 ) with lcore 0 00:15:33.418 Associating SPDK bdev Controller (SPDK2 ) with lcore 1 00:15:33.418 Associating SPDK bdev Controller (SPDK2 ) with lcore 2 00:15:33.418 Associating SPDK bdev Controller (SPDK2 ) with lcore 3 00:15:33.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration run with configuration: 00:15:33.418 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i -1 00:15:33.418 Initialization complete. Launching workers. 
00:15:33.418 Starting thread on core 1 with urgent priority queue 00:15:33.418 Starting thread on core 2 with urgent priority queue 00:15:33.418 Starting thread on core 3 with urgent priority queue 00:15:33.418 Starting thread on core 0 with urgent priority queue 00:15:33.418 SPDK bdev Controller (SPDK2 ) core 0: 6603.33 IO/s 15.14 secs/100000 ios 00:15:33.418 SPDK bdev Controller (SPDK2 ) core 1: 7299.33 IO/s 13.70 secs/100000 ios 00:15:33.418 SPDK bdev Controller (SPDK2 ) core 2: 7198.33 IO/s 13.89 secs/100000 ios 00:15:33.418 SPDK bdev Controller (SPDK2 ) core 3: 10452.67 IO/s 9.57 secs/100000 ios 00:15:33.418 ======================================================== 00:15:33.418 00:15:33.418 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/hello_world -d 256 -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.418 [2024-07-25 12:01:20.537458] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:33.418 Initializing NVMe Controllers 00:15:33.418 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.418 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:33.418 Namespace ID: 1 size: 0GB 00:15:33.418 Initialization complete. 00:15:33.418 INFO: using host memory buffer for IO 00:15:33.418 Hello world! 00:15:33.418 [2024-07-25 12:01:20.549547] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:33.418 12:01:20 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -g -d 256 -r 'trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' 00:15:33.418 EAL: No free 2048 kB hugepages reported on node 1 00:15:33.678 [2024-07-25 12:01:20.820026] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.058 Initializing NVMe Controllers 00:15:35.058 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.058 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.058 Initialization complete. Launching workers. 
00:15:35.058 submit (in ns) avg, min, max = 4933.7, 3203.5, 3999982.6 00:15:35.058 complete (in ns) avg, min, max = 22289.1, 1768.7, 4995450.4 00:15:35.058 00:15:35.058 Submit histogram 00:15:35.058 ================ 00:15:35.058 Range in us Cumulative Count 00:15:35.058 3.200 - 3.214: 0.0123% ( 2) 00:15:35.058 3.228 - 3.242: 0.0431% ( 5) 00:15:35.058 3.242 - 3.256: 0.0863% ( 7) 00:15:35.058 3.256 - 3.270: 0.1047% ( 3) 00:15:35.058 3.270 - 3.283: 0.2403% ( 22) 00:15:35.058 3.283 - 3.297: 1.3802% ( 185) 00:15:35.058 3.297 - 3.311: 4.8490% ( 563) 00:15:35.058 3.311 - 3.325: 9.6180% ( 774) 00:15:35.058 3.325 - 3.339: 15.5453% ( 962) 00:15:35.058 3.339 - 3.353: 21.5034% ( 967) 00:15:35.058 3.353 - 3.367: 27.3814% ( 954) 00:15:35.058 3.367 - 3.381: 32.7665% ( 874) 00:15:35.058 3.381 - 3.395: 38.2748% ( 894) 00:15:35.058 3.395 - 3.409: 43.4442% ( 839) 00:15:35.058 3.409 - 3.423: 47.5786% ( 671) 00:15:35.058 3.423 - 3.437: 51.7129% ( 671) 00:15:35.058 3.437 - 3.450: 56.9747% ( 854) 00:15:35.058 3.450 - 3.464: 63.6352% ( 1081) 00:15:35.058 3.464 - 3.478: 68.1947% ( 740) 00:15:35.058 3.478 - 3.492: 72.9267% ( 768) 00:15:35.058 3.492 - 3.506: 78.2686% ( 867) 00:15:35.058 3.506 - 3.520: 82.0209% ( 609) 00:15:35.058 3.520 - 3.534: 84.4486% ( 394) 00:15:35.058 3.534 - 3.548: 86.0136% ( 254) 00:15:35.058 3.548 - 3.562: 86.7776% ( 124) 00:15:35.058 3.562 - 3.590: 87.8250% ( 170) 00:15:35.058 3.590 - 3.617: 89.1682% ( 218) 00:15:35.058 3.617 - 3.645: 90.9181% ( 284) 00:15:35.058 3.645 - 3.673: 92.7726% ( 301) 00:15:35.058 3.673 - 3.701: 94.2083% ( 233) 00:15:35.058 3.701 - 3.729: 95.9396% ( 281) 00:15:35.058 3.729 - 3.757: 97.4553% ( 246) 00:15:35.058 3.757 - 3.784: 98.4227% ( 157) 00:15:35.058 3.784 - 3.812: 98.9834% ( 91) 00:15:35.058 3.812 - 3.840: 99.3592% ( 61) 00:15:35.058 3.840 - 3.868: 99.5564% ( 32) 00:15:35.058 3.868 - 3.896: 99.6550% ( 16) 00:15:35.058 3.896 - 3.923: 99.6796% ( 4) 00:15:35.058 3.923 - 3.951: 99.6919% ( 2) 00:15:35.058 3.951 - 3.979: 99.6981% ( 1) 00:15:35.058 3.979 - 4.007: 99.7166% ( 3) 00:15:35.058 4.063 - 4.090: 99.7227% ( 1) 00:15:35.058 5.370 - 5.398: 99.7289% ( 1) 00:15:35.058 5.565 - 5.593: 99.7351% ( 1) 00:15:35.058 5.732 - 5.760: 99.7412% ( 1) 00:15:35.058 5.983 - 6.010: 99.7474% ( 1) 00:15:35.058 6.094 - 6.122: 99.7535% ( 1) 00:15:35.058 6.177 - 6.205: 99.7659% ( 2) 00:15:35.058 6.205 - 6.233: 99.7720% ( 1) 00:15:35.058 6.261 - 6.289: 99.7782% ( 1) 00:15:35.058 6.289 - 6.317: 99.7843% ( 1) 00:15:35.058 6.372 - 6.400: 99.7967% ( 2) 00:15:35.058 6.511 - 6.539: 99.8028% ( 1) 00:15:35.058 6.595 - 6.623: 99.8090% ( 1) 00:15:35.058 6.623 - 6.650: 99.8152% ( 1) 00:15:35.058 6.650 - 6.678: 99.8213% ( 1) 00:15:35.058 6.678 - 6.706: 99.8336% ( 2) 00:15:35.058 6.706 - 6.734: 99.8398% ( 1) 00:15:35.058 6.734 - 6.762: 99.8460% ( 1) 00:15:35.058 6.762 - 6.790: 99.8521% ( 1) 00:15:35.058 6.790 - 6.817: 99.8583% ( 1) 00:15:35.058 6.873 - 6.901: 99.8644% ( 1) 00:15:35.058 6.984 - 7.012: 99.8706% ( 1) 00:15:35.058 7.123 - 7.179: 99.8829% ( 2) 00:15:35.058 7.235 - 7.290: 99.8953% ( 2) 00:15:35.058 7.290 - 7.346: 99.9014% ( 1) 00:15:35.058 7.624 - 7.680: 99.9076% ( 1) 00:15:35.058 7.791 - 7.847: 99.9137% ( 1) 00:15:35.058 8.181 - 8.237: 99.9199% ( 1) 00:15:35.058 8.348 - 8.403: 99.9261% ( 1) 00:15:35.058 8.682 - 8.737: 99.9384% ( 2) 00:15:35.058 9.183 - 9.238: 99.9445% ( 1) 00:15:35.058 9.350 - 9.405: 99.9507% ( 1) 00:15:35.058 9.517 - 9.572: 99.9569% ( 1) 00:15:35.058 9.628 - 9.683: 99.9630% ( 1) 00:15:35.058 3989.148 - 4017.642: 100.0000% ( 6) 00:15:35.058 00:15:35.058 Complete 
histogram 00:15:35.058 ================== 00:15:35.058 Range in us Cumulative Count 00:15:35.058 1.767 - 1.774: 0.0246% ( 4) 00:15:35.058 1.774 - [2024-07-25 12:01:21.916115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.058 1.781: 0.0555% ( 5) 00:15:35.058 1.781 - 1.795: 0.0801% ( 4) 00:15:35.058 1.795 - 1.809: 0.0924% ( 2) 00:15:35.058 1.809 - 1.823: 1.9100% ( 295) 00:15:35.058 1.823 - 1.837: 23.8016% ( 3553) 00:15:35.058 1.837 - 1.850: 41.3494% ( 2848) 00:15:35.058 1.850 - 1.864: 45.8903% ( 737) 00:15:35.058 1.864 - 1.878: 72.6248% ( 4339) 00:15:35.058 1.878 - 1.892: 92.4399% ( 3216) 00:15:35.058 1.892 - 1.906: 95.6993% ( 529) 00:15:35.058 1.906 - 1.920: 97.2520% ( 252) 00:15:35.058 1.920 - 1.934: 97.8866% ( 103) 00:15:35.058 1.934 - 1.948: 98.3734% ( 79) 00:15:35.058 1.948 - 1.962: 98.7924% ( 68) 00:15:35.058 1.962 - 1.976: 99.0018% ( 34) 00:15:35.058 1.976 - 1.990: 99.0943% ( 15) 00:15:35.058 1.990 - 2.003: 99.1436% ( 8) 00:15:35.058 2.003 - 2.017: 99.1497% ( 1) 00:15:35.058 2.017 - 2.031: 99.1805% ( 5) 00:15:35.058 2.031 - 2.045: 99.1867% ( 1) 00:15:35.058 2.045 - 2.059: 99.1990% ( 2) 00:15:35.058 2.059 - 2.073: 99.2052% ( 1) 00:15:35.058 2.073 - 2.087: 99.2113% ( 1) 00:15:35.058 2.087 - 2.101: 99.2175% ( 1) 00:15:35.058 2.101 - 2.115: 99.2237% ( 1) 00:15:35.058 2.129 - 2.143: 99.2421% ( 3) 00:15:35.058 2.143 - 2.157: 99.2483% ( 1) 00:15:35.058 2.268 - 2.282: 99.2545% ( 1) 00:15:35.058 2.310 - 2.323: 99.2606% ( 1) 00:15:35.058 3.812 - 3.840: 99.2668% ( 1) 00:15:35.058 4.118 - 4.146: 99.2730% ( 1) 00:15:35.058 4.174 - 4.202: 99.2791% ( 1) 00:15:35.058 4.230 - 4.257: 99.2914% ( 2) 00:15:35.058 4.452 - 4.480: 99.2976% ( 1) 00:15:35.058 4.619 - 4.647: 99.3038% ( 1) 00:15:35.058 4.647 - 4.675: 99.3099% ( 1) 00:15:35.058 5.009 - 5.037: 99.3161% ( 1) 00:15:35.058 5.037 - 5.064: 99.3222% ( 1) 00:15:35.058 5.064 - 5.092: 99.3284% ( 1) 00:15:35.058 5.203 - 5.231: 99.3346% ( 1) 00:15:35.058 5.398 - 5.426: 99.3407% ( 1) 00:15:35.058 5.482 - 5.510: 99.3469% ( 1) 00:15:35.058 5.510 - 5.537: 99.3530% ( 1) 00:15:35.058 5.677 - 5.704: 99.3592% ( 1) 00:15:35.058 5.704 - 5.732: 99.3654% ( 1) 00:15:35.058 5.760 - 5.788: 99.3715% ( 1) 00:15:35.058 5.983 - 6.010: 99.3777% ( 1) 00:15:35.058 6.094 - 6.122: 99.3839% ( 1) 00:15:35.058 6.122 - 6.150: 99.3900% ( 1) 00:15:35.058 6.233 - 6.261: 99.3962% ( 1) 00:15:35.058 6.289 - 6.317: 99.4085% ( 2) 00:15:35.058 6.344 - 6.372: 99.4147% ( 1) 00:15:35.058 6.511 - 6.539: 99.4208% ( 1) 00:15:35.058 6.623 - 6.650: 99.4270% ( 1) 00:15:35.058 6.762 - 6.790: 99.4331% ( 1) 00:15:35.058 6.790 - 6.817: 99.4393% ( 1) 00:15:35.058 7.012 - 7.040: 99.4516% ( 2) 00:15:35.058 7.096 - 7.123: 99.4578% ( 1) 00:15:35.058 9.795 - 9.850: 99.4640% ( 1) 00:15:35.058 10.463 - 10.518: 99.4701% ( 1) 00:15:35.058 13.969 - 14.024: 99.4763% ( 1) 00:15:35.058 14.247 - 14.358: 99.4824% ( 1) 00:15:35.058 39.624 - 39.847: 99.4886% ( 1) 00:15:35.058 2749.663 - 2763.910: 99.4948% ( 1) 00:15:35.058 3989.148 - 4017.642: 99.9938% ( 81) 00:15:35.058 4986.435 - 5014.929: 100.0000% ( 1) 00:15:35.058 00:15:35.058 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@90 -- # aer_vfio_user /var/run/vfio-user/domain/vfio-user2/2 nqn.2019-07.io.spdk:cnode2 2 00:15:35.058 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@22 -- # local traddr=/var/run/vfio-user/domain/vfio-user2/2 00:15:35.058 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@23 -- # local 
subnqn=nqn.2019-07.io.spdk:cnode2 00:15:35.058 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@24 -- # local malloc_num=Malloc4 00:15:35.058 12:01:21 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.058 [ 00:15:35.058 { 00:15:35.058 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.058 "subtype": "Discovery", 00:15:35.058 "listen_addresses": [], 00:15:35.058 "allow_any_host": true, 00:15:35.058 "hosts": [] 00:15:35.058 }, 00:15:35.058 { 00:15:35.058 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.058 "subtype": "NVMe", 00:15:35.058 "listen_addresses": [ 00:15:35.058 { 00:15:35.058 "trtype": "VFIOUSER", 00:15:35.058 "adrfam": "IPv4", 00:15:35.058 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.058 "trsvcid": "0" 00:15:35.058 } 00:15:35.058 ], 00:15:35.058 "allow_any_host": true, 00:15:35.058 "hosts": [], 00:15:35.058 "serial_number": "SPDK1", 00:15:35.058 "model_number": "SPDK bdev Controller", 00:15:35.058 "max_namespaces": 32, 00:15:35.058 "min_cntlid": 1, 00:15:35.058 "max_cntlid": 65519, 00:15:35.058 "namespaces": [ 00:15:35.058 { 00:15:35.058 "nsid": 1, 00:15:35.058 "bdev_name": "Malloc1", 00:15:35.058 "name": "Malloc1", 00:15:35.058 "nguid": "C376157790A241E7B4F262802BD73461", 00:15:35.058 "uuid": "c3761577-90a2-41e7-b4f2-62802bd73461" 00:15:35.058 }, 00:15:35.058 { 00:15:35.058 "nsid": 2, 00:15:35.058 "bdev_name": "Malloc3", 00:15:35.058 "name": "Malloc3", 00:15:35.058 "nguid": "19DB1C01B81649FF9AD36AFFBF508CD7", 00:15:35.058 "uuid": "19db1c01-b816-49ff-9ad3-6affbf508cd7" 00:15:35.058 } 00:15:35.058 ] 00:15:35.058 }, 00:15:35.058 { 00:15:35.058 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.058 "subtype": "NVMe", 00:15:35.058 "listen_addresses": [ 00:15:35.058 { 00:15:35.058 "trtype": "VFIOUSER", 00:15:35.058 "adrfam": "IPv4", 00:15:35.058 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.058 "trsvcid": "0" 00:15:35.058 } 00:15:35.058 ], 00:15:35.058 "allow_any_host": true, 00:15:35.058 "hosts": [], 00:15:35.058 "serial_number": "SPDK2", 00:15:35.058 "model_number": "SPDK bdev Controller", 00:15:35.058 "max_namespaces": 32, 00:15:35.058 "min_cntlid": 1, 00:15:35.058 "max_cntlid": 65519, 00:15:35.058 "namespaces": [ 00:15:35.058 { 00:15:35.058 "nsid": 1, 00:15:35.058 "bdev_name": "Malloc2", 00:15:35.058 "name": "Malloc2", 00:15:35.058 "nguid": "2F3D32D328EC49ACA7A473356675261D", 00:15:35.058 "uuid": "2f3d32d3-28ec-49ac-a7a4-73356675261d" 00:15:35.058 } 00:15:35.058 ] 00:15:35.058 } 00:15:35.058 ] 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@27 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:VFIOUSER traddr:/var/run/vfio-user/domain/vfio-user2/2 subnqn:nqn.2019-07.io.spdk:cnode2' -n 2 -g -t /tmp/aer_touch_file 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@34 -- # aerpid=312429 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@37 -- # waitforfile /tmp/aer_touch_file 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1265 -- # local i=0 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1276 -- # return 0 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@38 -- # rm -f /tmp/aer_touch_file 00:15:35.058 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 --name Malloc4 00:15:35.058 EAL: No free 2048 kB hugepages reported on node 1 00:15:35.058 [2024-07-25 12:01:22.269453] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: enabling controller 00:15:35.318 Malloc4 00:15:35.318 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2 00:15:35.318 [2024-07-25 12:01:22.522419] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user/domain/vfio-user2/2: disabling controller 00:15:35.318 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_get_subsystems 00:15:35.318 Asynchronous Event Request test 00:15:35.318 Attaching to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.318 Attached to /var/run/vfio-user/domain/vfio-user2/2 00:15:35.318 Registering asynchronous event callbacks... 00:15:35.318 Starting namespace attribute notice tests for all controllers... 00:15:35.318 /var/run/vfio-user/domain/vfio-user2/2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:15:35.318 aer_cb - Changed Namespace 00:15:35.318 Cleaning up... 
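For reference, the asynchronous-event exercise above hot-adds a second namespace to cnode2 and then re-queries the target; the JSON listing that follows reflects the new Malloc4 namespace (nsid 2). A minimal sketch of that RPC sequence, using only commands and names that appear verbatim in this log ($RPC is shorthand for the workspace rpc.py script):

    RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $RPC bdev_malloc_create 64 512 --name Malloc4                        # 64 MB malloc bdev, 512-byte blocks
    $RPC nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc4 -n 2   # hot-add as NSID 2; this raises the namespace-attribute AEN caught by aer_cb
    $RPC nvmf_get_subsystems                                             # dumps the subsystem listing shown below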
00:15:35.578 [ 00:15:35.578 { 00:15:35.578 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:15:35.578 "subtype": "Discovery", 00:15:35.578 "listen_addresses": [], 00:15:35.578 "allow_any_host": true, 00:15:35.578 "hosts": [] 00:15:35.578 }, 00:15:35.578 { 00:15:35.578 "nqn": "nqn.2019-07.io.spdk:cnode1", 00:15:35.578 "subtype": "NVMe", 00:15:35.578 "listen_addresses": [ 00:15:35.578 { 00:15:35.578 "trtype": "VFIOUSER", 00:15:35.578 "adrfam": "IPv4", 00:15:35.578 "traddr": "/var/run/vfio-user/domain/vfio-user1/1", 00:15:35.578 "trsvcid": "0" 00:15:35.578 } 00:15:35.578 ], 00:15:35.578 "allow_any_host": true, 00:15:35.578 "hosts": [], 00:15:35.578 "serial_number": "SPDK1", 00:15:35.578 "model_number": "SPDK bdev Controller", 00:15:35.578 "max_namespaces": 32, 00:15:35.578 "min_cntlid": 1, 00:15:35.578 "max_cntlid": 65519, 00:15:35.578 "namespaces": [ 00:15:35.578 { 00:15:35.578 "nsid": 1, 00:15:35.578 "bdev_name": "Malloc1", 00:15:35.578 "name": "Malloc1", 00:15:35.578 "nguid": "C376157790A241E7B4F262802BD73461", 00:15:35.578 "uuid": "c3761577-90a2-41e7-b4f2-62802bd73461" 00:15:35.578 }, 00:15:35.578 { 00:15:35.578 "nsid": 2, 00:15:35.578 "bdev_name": "Malloc3", 00:15:35.578 "name": "Malloc3", 00:15:35.578 "nguid": "19DB1C01B81649FF9AD36AFFBF508CD7", 00:15:35.578 "uuid": "19db1c01-b816-49ff-9ad3-6affbf508cd7" 00:15:35.578 } 00:15:35.578 ] 00:15:35.578 }, 00:15:35.578 { 00:15:35.578 "nqn": "nqn.2019-07.io.spdk:cnode2", 00:15:35.578 "subtype": "NVMe", 00:15:35.578 "listen_addresses": [ 00:15:35.578 { 00:15:35.578 "trtype": "VFIOUSER", 00:15:35.578 "adrfam": "IPv4", 00:15:35.578 "traddr": "/var/run/vfio-user/domain/vfio-user2/2", 00:15:35.578 "trsvcid": "0" 00:15:35.578 } 00:15:35.578 ], 00:15:35.578 "allow_any_host": true, 00:15:35.578 "hosts": [], 00:15:35.578 "serial_number": "SPDK2", 00:15:35.578 "model_number": "SPDK bdev Controller", 00:15:35.578 "max_namespaces": 32, 00:15:35.578 "min_cntlid": 1, 00:15:35.578 "max_cntlid": 65519, 00:15:35.578 "namespaces": [ 00:15:35.578 { 00:15:35.578 "nsid": 1, 00:15:35.578 "bdev_name": "Malloc2", 00:15:35.578 "name": "Malloc2", 00:15:35.578 "nguid": "2F3D32D328EC49ACA7A473356675261D", 00:15:35.578 "uuid": "2f3d32d3-28ec-49ac-a7a4-73356675261d" 00:15:35.578 }, 00:15:35.578 { 00:15:35.578 "nsid": 2, 00:15:35.578 "bdev_name": "Malloc4", 00:15:35.578 "name": "Malloc4", 00:15:35.578 "nguid": "7605C8D0FDE24E26B4D75F0C50B7896C", 00:15:35.578 "uuid": "7605c8d0-fde2-4e26-b4d7-5f0c50b7896c" 00:15:35.578 } 00:15:35.578 ] 00:15:35.578 } 00:15:35.578 ] 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@44 -- # wait 312429 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@105 -- # stop_nvmf_vfio_user 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 304794 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@948 -- # '[' -z 304794 ']' 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 304794 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 304794 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 304794' 00:15:35.578 killing process with pid 304794 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 304794 00:15:35.578 12:01:22 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 304794 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@108 -- # setup_nvmf_vfio_user --interrupt-mode '-M -I' 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@51 -- # local nvmf_app_args=--interrupt-mode 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@52 -- # local 'transport_args=-M -I' 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@55 -- # nvmfpid=312465 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@57 -- # echo 'Process pid: 312465' 00:15:35.838 Process pid: 312465 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@59 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@60 -- # waitforlisten 312465 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@829 -- # '[' -z 312465 ']' 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:35.838 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:35.838 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:35.838 [2024-07-25 12:01:23.083651] thread.c:2948:spdk_interrupt_mode_enable: *NOTICE*: Set SPDK running in interrupt mode. 00:15:35.838 [2024-07-25 12:01:23.084568] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:15:35.838 [2024-07-25 12:01:23.084608] [ DPDK EAL parameters: nvmf -l 0,1,2,3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:36.097 EAL: No free 2048 kB hugepages reported on node 1 00:15:36.097 [2024-07-25 12:01:23.140049] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:15:36.097 [2024-07-25 12:01:23.218587] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:36.097 [2024-07-25 12:01:23.218625] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:36.097 [2024-07-25 12:01:23.218632] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:36.097 [2024-07-25 12:01:23.218638] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:36.098 [2024-07-25 12:01:23.218643] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:36.098 [2024-07-25 12:01:23.218679] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:36.098 [2024-07-25 12:01:23.218775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:36.098 [2024-07-25 12:01:23.218860] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:15:36.098 [2024-07-25 12:01:23.218861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:36.098 [2024-07-25 12:01:23.296050] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_000) to intr mode from intr mode. 00:15:36.098 [2024-07-25 12:01:23.296200] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_001) to intr mode from intr mode. 00:15:36.098 [2024-07-25 12:01:23.296366] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_002) to intr mode from intr mode. 00:15:36.098 [2024-07-25 12:01:23.296735] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (app_thread) to intr mode from intr mode. 00:15:36.098 [2024-07-25 12:01:23.296981] thread.c:2099:spdk_thread_set_interrupt_mode: *NOTICE*: Set spdk_thread (nvmf_tgt_poll_group_003) to intr mode from intr mode. 
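The lines that follow bring the vfio-user target back up in interrupt mode (note the --interrupt-mode flag on nvmf_tgt and the -M -I arguments passed to nvmf_create_transport) and then recreate both subsystems. Condensed into a sketch for readability, using only commands that appear verbatim below ($SPDK stands for the workspace spdk directory; backgrounding of the target is implied by the test harness rather than shown in the log):

    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    $SPDK/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m '[0,1,2,3]' --interrupt-mode &   # target process in interrupt mode
    $SPDK/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I                # vfio-user transport (flags copied from the log)
    for i in 1 2; do                                                            # one controller per vfio-user device
        mkdir -p /var/run/vfio-user/domain/vfio-user$i/$i
        $SPDK/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc$i
        $SPDK/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode$i -a -s SPDK$i
        $SPDK/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode$i Malloc$i
        $SPDK/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode$i -t VFIOUSER \
            -a /var/run/vfio-user/domain/vfio-user$i/$i -s 0
    done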
00:15:36.665 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:36.665 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@862 -- # return 0 00:15:36.665 12:01:23 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@62 -- # sleep 1 00:15:38.045 12:01:24 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t VFIOUSER -M -I 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@66 -- # mkdir -p /var/run/vfio-user 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # seq 1 2 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user1/1 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:15:38.045 Malloc1 00:15:38.045 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode1 -a -s SPDK1 00:15:38.305 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode1 Malloc1 00:15:38.565 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode1 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user1/1 -s 0 00:15:38.565 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@68 -- # for i in $(seq 1 $NUM_DEVICES) 00:15:38.565 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@69 -- # mkdir -p /var/run/vfio-user/domain/vfio-user2/2 00:15:38.565 12:01:25 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc2 00:15:38.825 Malloc2 00:15:38.825 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2019-07.io.spdk:cnode2 -a -s SPDK2 00:15:39.085 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2019-07.io.spdk:cnode2 Malloc2 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@74 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2019-07.io.spdk:cnode2 -t VFIOUSER -a /var/run/vfio-user/domain/vfio-user2/2 -s 0 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@109 -- # stop_nvmf_vfio_user 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@95 -- # killprocess 312465 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- 
common/autotest_common.sh@948 -- # '[' -z 312465 ']' 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@952 -- # kill -0 312465 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # uname 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:39.345 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 312465 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@966 -- # echo 'killing process with pid 312465' 00:15:39.605 killing process with pid 312465 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@967 -- # kill 312465 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@972 -- # wait 312465 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@97 -- # rm -rf /var/run/vfio-user 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- target/nvmf_vfio_user.sh@99 -- # trap - SIGINT SIGTERM EXIT 00:15:39.605 00:15:39.605 real 0m51.344s 00:15:39.605 user 3m23.232s 00:15:39.605 sys 0m3.673s 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:39.605 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user -- common/autotest_common.sh@10 -- # set +x 00:15:39.605 ************************************ 00:15:39.605 END TEST nvmf_vfio_user 00:15:39.605 ************************************ 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@32 -- # run_test nvmf_vfio_user_nvme_compliance /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 ************************************ 00:15:39.866 START TEST nvmf_vfio_user_nvme_compliance 00:15:39.866 ************************************ 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/compliance.sh --transport=tcp 00:15:39.866 * Looking for test storage... 
00:15:39.866 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # uname -s 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:39.866 12:01:26 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@5 -- # export PATH 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@47 -- # : 0 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@31 -- # 
NVMF_APP+=("${NO_HUGE[@]}") 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@11 -- # MALLOC_BDEV_SIZE=64 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # export TEST_TRANSPORT=VFIOUSER 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@14 -- # TEST_TRANSPORT=VFIOUSER 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@16 -- # rm -rf /var/run/vfio-user 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@20 -- # nvmfpid=313214 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@21 -- # echo 'Process pid: 313214' 00:15:39.866 Process pid: 313214 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@23 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x7 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@24 -- # waitforlisten 313214 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@829 -- # '[' -z 313214 ']' 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:39.866 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:39.866 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:39.866 [2024-07-25 12:01:27.066343] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:15:39.866 [2024-07-25 12:01:27.066390] [ DPDK EAL parameters: nvmf -c 0x7 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:39.866 EAL: No free 2048 kB hugepages reported on node 1 00:15:40.126 [2024-07-25 12:01:27.117893] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:40.126 [2024-07-25 12:01:27.190123] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:15:40.126 [2024-07-25 12:01:27.190164] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:15:40.126 [2024-07-25 12:01:27.190171] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:15:40.126 [2024-07-25 12:01:27.190177] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:15:40.126 [2024-07-25 12:01:27.190182] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:15:40.126 [2024-07-25 12:01:27.190272] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:15:40.126 [2024-07-25 12:01:27.190368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:15:40.126 [2024-07-25 12:01:27.190370] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:15:40.696 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:40.696 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@862 -- # return 0 00:15:40.696 12:01:27 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@26 -- # sleep 1 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@28 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@29 -- # traddr=/var/run/vfio-user 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@31 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@33 -- # mkdir -p /var/run/vfio-user 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@35 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.635 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.895 malloc0 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@36 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk -m 
32 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@37 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@38 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:41.895 12:01:28 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/compliance/nvme_compliance -g -r 'trtype:VFIOUSER traddr:/var/run/vfio-user subnqn:nqn.2021-09.io.spdk:cnode0' 00:15:41.895 EAL: No free 2048 kB hugepages reported on node 1 00:15:41.895 00:15:41.895 00:15:41.895 CUnit - A unit testing framework for C - Version 2.1-3 00:15:41.895 http://cunit.sourceforge.net/ 00:15:41.895 00:15:41.895 00:15:41.895 Suite: nvme_compliance 00:15:41.895 Test: admin_identify_ctrlr_verify_dptr ...[2024-07-25 12:01:29.081130] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:41.895 [2024-07-25 12:01:29.082481] vfio_user.c: 804:nvme_cmd_map_prps: *ERROR*: no PRP2, 3072 remaining 00:15:41.895 [2024-07-25 12:01:29.082497] vfio_user.c:5514:map_admin_cmd_req: *ERROR*: /var/run/vfio-user: map Admin Opc 6 failed 00:15:41.895 [2024-07-25 12:01:29.082503] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x6 failed 00:15:41.895 [2024-07-25 12:01:29.084147] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:41.895 passed 00:15:42.155 Test: admin_identify_ctrlr_verify_fused ...[2024-07-25 12:01:29.162690] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.155 [2024-07-25 12:01:29.165715] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.155 passed 00:15:42.155 Test: admin_identify_ns ...[2024-07-25 12:01:29.242622] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.155 [2024-07-25 12:01:29.306050] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 0 00:15:42.155 [2024-07-25 12:01:29.314050] ctrlr.c:2740:_nvmf_ctrlr_get_ns_safe: *ERROR*: Identify Namespace for invalid NSID 4294967295 00:15:42.155 [2024-07-25 
12:01:29.335143] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.155 passed 00:15:42.415 Test: admin_get_features_mandatory_features ...[2024-07-25 12:01:29.408401] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.415 [2024-07-25 12:01:29.412422] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.415 passed 00:15:42.415 Test: admin_get_features_optional_features ...[2024-07-25 12:01:29.488933] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.415 [2024-07-25 12:01:29.491952] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.415 passed 00:15:42.415 Test: admin_set_features_number_of_queues ...[2024-07-25 12:01:29.569843] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.674 [2024-07-25 12:01:29.675135] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.674 passed 00:15:42.674 Test: admin_get_log_page_mandatory_logs ...[2024-07-25 12:01:29.749304] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.674 [2024-07-25 12:01:29.752325] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.674 passed 00:15:42.674 Test: admin_get_log_page_with_lpo ...[2024-07-25 12:01:29.830256] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.674 [2024-07-25 12:01:29.901053] ctrlr.c:2688:nvmf_ctrlr_get_log_page: *ERROR*: Get log page: offset (516) > len (512) 00:15:42.674 [2024-07-25 12:01:29.914131] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.934 passed 00:15:42.934 Test: fabric_property_get ...[2024-07-25 12:01:29.988341] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.934 [2024-07-25 12:01:29.989570] vfio_user.c:5607:handle_cmd_req: *ERROR*: /var/run/vfio-user: process NVMe command opc 0x7f failed 00:15:42.934 [2024-07-25 12:01:29.991364] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.934 passed 00:15:42.934 Test: admin_delete_io_sq_use_admin_qid ...[2024-07-25 12:01:30.071990] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:42.934 [2024-07-25 12:01:30.073311] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:0 does not exist 00:15:42.934 [2024-07-25 12:01:30.074998] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:42.934 passed 00:15:42.934 Test: admin_delete_io_sq_delete_sq_twice ...[2024-07-25 12:01:30.153138] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.193 [2024-07-25 12:01:30.238054] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.193 [2024-07-25 12:01:30.254055] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.193 [2024-07-25 12:01:30.259220] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.193 passed 00:15:43.193 Test: admin_delete_io_cq_use_admin_qid ...[2024-07-25 12:01:30.335615] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.193 [2024-07-25 12:01:30.336850] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O cqid:0 does not exist 
00:15:43.193 [2024-07-25 12:01:30.340641] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.193 passed 00:15:43.193 Test: admin_delete_io_cq_delete_cq_first ...[2024-07-25 12:01:30.417570] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.453 [2024-07-25 12:01:30.493053] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:43.453 [2024-07-25 12:01:30.517058] vfio_user.c:2309:handle_del_io_q: *ERROR*: /var/run/vfio-user: I/O sqid:1 does not exist 00:15:43.453 [2024-07-25 12:01:30.522132] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.453 passed 00:15:43.453 Test: admin_create_io_cq_verify_iv_pc ...[2024-07-25 12:01:30.600109] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.453 [2024-07-25 12:01:30.601337] vfio_user.c:2158:handle_create_io_cq: *ERROR*: /var/run/vfio-user: IV is too big 00:15:43.453 [2024-07-25 12:01:30.601361] vfio_user.c:2152:handle_create_io_cq: *ERROR*: /var/run/vfio-user: non-PC CQ not supported 00:15:43.453 [2024-07-25 12:01:30.603115] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.453 passed 00:15:43.453 Test: admin_create_io_sq_verify_qsize_cqid ...[2024-07-25 12:01:30.681010] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.712 [2024-07-25 12:01:30.774047] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 1 00:15:43.712 [2024-07-25 12:01:30.780055] vfio_user.c:2240:handle_create_io_q: *ERROR*: /var/run/vfio-user: invalid I/O queue size 257 00:15:43.712 [2024-07-25 12:01:30.790048] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:0 00:15:43.712 [2024-07-25 12:01:30.798051] vfio_user.c:2038:handle_create_io_sq: *ERROR*: /var/run/vfio-user: invalid cqid:128 00:15:43.712 [2024-07-25 12:01:30.827127] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.712 passed 00:15:43.712 Test: admin_create_io_sq_verify_pc ...[2024-07-25 12:01:30.903277] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:43.712 [2024-07-25 12:01:30.920056] vfio_user.c:2051:handle_create_io_sq: *ERROR*: /var/run/vfio-user: non-PC SQ not supported 00:15:43.712 [2024-07-25 12:01:30.937430] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:43.972 passed 00:15:43.972 Test: admin_create_io_qp_max_qps ...[2024-07-25 12:01:31.014966] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:44.911 [2024-07-25 12:01:32.129051] nvme_ctrlr.c:5465:spdk_nvme_ctrlr_alloc_qid: *ERROR*: [/var/run/vfio-user] No free I/O queue IDs 00:15:45.480 [2024-07-25 12:01:32.520890] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.480 passed 00:15:45.480 Test: admin_create_io_sq_shared_cq ...[2024-07-25 12:01:32.598013] vfio_user.c:2836:enable_ctrlr: *NOTICE*: /var/run/vfio-user: enabling controller 00:15:45.740 [2024-07-25 12:01:32.732051] vfio_user.c:2319:handle_del_io_q: *ERROR*: /var/run/vfio-user: the associated SQ must be deleted first 00:15:45.740 [2024-07-25 12:01:32.769106] vfio_user.c:2798:disable_ctrlr: *NOTICE*: /var/run/vfio-user: disabling controller 00:15:45.740 passed 00:15:45.740 00:15:45.740 Run Summary: Type Total Ran Passed Failed Inactive 00:15:45.740 
suites 1 1 n/a 0 0 00:15:45.740 tests 18 18 18 0 0 00:15:45.740 asserts 360 360 360 0 n/a 00:15:45.740 00:15:45.740 Elapsed time = 1.521 seconds 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@42 -- # killprocess 313214 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@948 -- # '[' -z 313214 ']' 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@952 -- # kill -0 313214 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # uname 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 313214 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@966 -- # echo 'killing process with pid 313214' 00:15:45.740 killing process with pid 313214 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@967 -- # kill 313214 00:15:45.740 12:01:32 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@972 -- # wait 313214 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@44 -- # rm -rf /var/run/vfio-user 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- compliance/compliance.sh@46 -- # trap - SIGINT SIGTERM EXIT 00:15:46.003 00:15:46.003 real 0m6.166s 00:15:46.003 user 0m17.635s 00:15:46.003 sys 0m0.436s 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@1124 -- # xtrace_disable 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_nvme_compliance -- common/autotest_common.sh@10 -- # set +x 00:15:46.003 ************************************ 00:15:46.003 END TEST nvmf_vfio_user_nvme_compliance 00:15:46.003 ************************************ 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@33 -- # run_test nvmf_vfio_user_fuzz /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:15:46.003 ************************************ 00:15:46.003 START TEST nvmf_vfio_user_fuzz 00:15:46.003 ************************************ 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/vfio_user_fuzz.sh --transport=tcp 00:15:46.003 * Looking for test storage... 
00:15:46.003 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # uname -s 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.003 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@5 -- # export PATH 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@47 -- # : 0 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@33 
-- # '[' -n '' ']' 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- nvmf/common.sh@51 -- # have_pci_nics=0 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@12 -- # MALLOC_BDEV_SIZE=64 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@15 -- # nqn=nqn.2021-09.io.spdk:cnode0 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@16 -- # traddr=/var/run/vfio-user 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # export TEST_TRANSPORT=VFIOUSER 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@18 -- # TEST_TRANSPORT=VFIOUSER 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@20 -- # rm -rf /var/run/vfio-user 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@24 -- # nvmfpid=314387 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@25 -- # echo 'Process pid: 314387' 00:15:46.004 Process pid: 314387 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@27 -- # trap 'killprocess $nvmfpid; exit 1' SIGINT SIGTERM EXIT 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@28 -- # waitforlisten 314387 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@829 -- # '[' -z 314387 ']' 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@834 -- # local max_retries=100 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:46.004 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
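The 'Waiting for process to start up...' message above comes from waitforlisten, which polls the target's RPC socket until it answers. A rough stand-in (not the actual helper from autotest_common.sh, and assuming only rpc.py plus the /var/tmp/spdk.sock path named in the message) would be:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    for _ in $(seq 1 100); do
        # rpc_get_methods succeeds once the freshly started nvmf_tgt is listening on the socket
        $rpc -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 0.5
    done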
00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@838 -- # xtrace_disable 00:15:46.004 12:01:33 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:46.973 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:15:46.973 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@862 -- # return 0 00:15:46.973 12:01:34 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@30 -- # sleep 1 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@32 -- # rpc_cmd nvmf_create_transport -t VFIOUSER 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@34 -- # mkdir -p /var/run/vfio-user 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b malloc0 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 malloc0 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@39 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@41 -- # trid='trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' 
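The rpc_cmd calls above effectively forward to rpc.py; reproduced by hand (NQN, serial and malloc parameters as logged), the fuzz-target setup amounts to:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    $rpc nvmf_create_transport -t VFIOUSER
    mkdir -p /var/run/vfio-user
    $rpc bdev_malloc_create 64 512 -b malloc0
    $rpc nvmf_create_subsystem nqn.2021-09.io.spdk:cnode0 -a -s spdk
    $rpc nvmf_subsystem_add_ns nqn.2021-09.io.spdk:cnode0 malloc0
    $rpc nvmf_subsystem_add_listener nqn.2021-09.io.spdk:cnode0 -t VFIOUSER -a /var/run/vfio-user -s 0
    # which is the endpoint the trid built above hands to nvme_fuzz:
    #   trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user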
00:15:47.912 12:01:35 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/fuzz/nvme_fuzz/nvme_fuzz -m 0x2 -t 30 -S 123456 -F 'trtype:VFIOUSER subnqn:nqn.2021-09.io.spdk:cnode0 traddr:/var/run/vfio-user' -N -a 00:16:20.005 Fuzzing completed. Shutting down the fuzz application 00:16:20.005 00:16:20.005 Dumping successful admin opcodes: 00:16:20.005 8, 9, 10, 24, 00:16:20.005 Dumping successful io opcodes: 00:16:20.005 0, 00:16:20.005 NS: 0x200003a1ef00 I/O qp, Total commands completed: 1159819, total successful commands: 4559, random_seed: 3652781824 00:16:20.005 NS: 0x200003a1ef00 admin qp, Total commands completed: 289139, total successful commands: 2334, random_seed: 2947298944 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@44 -- # rpc_cmd nvmf_delete_subsystem nqn.2021-09.io.spdk:cnode0 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@46 -- # killprocess 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@948 -- # '[' -z 314387 ']' 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@952 -- # kill -0 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # uname 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@966 -- # echo 'killing process with pid 314387' 00:16:20.005 killing process with pid 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@967 -- # kill 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@972 -- # wait 314387 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@48 -- # rm -rf /var/run/vfio-user /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_log.txt /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/vfio_user_fuzz_tgt_output.txt 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- target/vfio_user_fuzz.sh@50 -- # trap - SIGINT SIGTERM EXIT 00:16:20.005 00:16:20.005 real 0m32.724s 00:16:20.005 user 0m35.468s 00:16:20.005 sys 0m26.680s 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@1124 -- # xtrace_disable 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_vfio_user_fuzz -- common/autotest_common.sh@10 -- # set +x 00:16:20.005 
************************************ 00:16:20.005 END TEST nvmf_vfio_user_fuzz 00:16:20.005 ************************************ 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@37 -- # run_test nvmf_auth_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:16:20.005 ************************************ 00:16:20.005 START TEST nvmf_auth_target 00:16:20.005 ************************************ 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/auth.sh --transport=tcp 00:16:20.005 * Looking for test storage... 00:16:20.005 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:16:20.005 12:02:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # uname -s 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.005 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@45 -- # source 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@5 -- # export PATH 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@47 -- # : 0 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:16:20.006 12:02:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@14 -- # dhgroups=("null" "ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@15 -- # subnqn=nqn.2024-03.io.spdk:cnode0 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@16 -- # hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@17 -- # hostsock=/var/tmp/host.sock 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # keys=() 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@18 -- # ckeys=() 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@59 -- # nvmftestinit 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@285 -- # xtrace_disable 00:16:20.006 12:02:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # pci_devs=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@291 -- # local -a 
pci_devs 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # net_devs=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # e810=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@296 -- # local -ga e810 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # x722=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@297 -- # local -ga x722 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # mlx=() 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@298 -- # local -ga mlx 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:16:24.207 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:16:24.208 Found 0000:86:00.0 (0x8086 - 0x159b) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:16:24.208 Found 0000:86:00.1 (0x8086 - 0x159b) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:16:24.208 Found net devices under 0000:86:00.0: cvl_0_0 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:16:24.208 Found net devices under 0000:86:00.1: cvl_0_1 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@414 -- # is_hw=yes 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 
up 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:16:24.208 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:16:24.208 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.284 ms 00:16:24.208 00:16:24.208 --- 10.0.0.2 ping statistics --- 00:16:24.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.208 rtt min/avg/max/mdev = 0.284/0.284/0.284/0.000 ms 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:16:24.208 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:16:24.208 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.287 ms 00:16:24.208 00:16:24.208 --- 10.0.0.1 ping statistics --- 00:16:24.208 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:16:24.208 rtt min/avg/max/mdev = 0.287/0.287/0.287/0.000 ms 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@422 -- # return 0 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@60 -- # nvmfappstart -L nvmf_auth 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # nvmfpid=322709 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 322709 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 322709 ']' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 
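The nvmf_tcp_init portion of the trace above reduces to a short interface/namespace recipe. The commands below are only a condensed restatement of the ip/iptables calls already visible in the xtrace; the cvl_0_0/cvl_0_1 device names and the 10.0.0.x addresses are whatever this particular test node reported, not fixed values:

    ip -4 addr flush cvl_0_0
    ip -4 addr flush cvl_0_1
    ip netns add cvl_0_0_ns_spdk                                        # target side gets its own network namespace
    ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # move one NIC port into that namespace
    ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator/host address (root namespace)
    ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (inside the namespace)
    ip link set cvl_0_1 up
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
    ip netns exec cvl_0_0_ns_spdk ip link set lo up
    iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # accept NVMe/TCP traffic on the host port
    ping -c 1 10.0.0.2                                                  # host -> target reachability check
    ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target -> host reachability check

The nvmf_tgt application is then launched inside that namespace (the ip netns exec cvl_0_0_ns_spdk ... nvmf_tgt -i 0 -e 0xFFFF -L nvmf_auth invocation above), so it listens on 10.0.0.2 while the host-side tools connect from 10.0.0.1 in the root namespace.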
00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:24.208 12:02:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@62 -- # hostpid=322766 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt -m 2 -r /var/tmp/host.sock -L nvme_auth 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@64 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key null 48 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=null 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=7e0195921ad6ba2da6675fe77e858a2215f07e52399f0e58 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.DdM 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 7e0195921ad6ba2da6675fe77e858a2215f07e52399f0e58 0 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 7e0195921ad6ba2da6675fe77e858a2215f07e52399f0e58 0 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=7e0195921ad6ba2da6675fe77e858a2215f07e52399f0e58 00:16:25.150 
12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=0 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.DdM 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.DdM 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # keys[0]=/tmp/spdk.key-null.DdM 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # gen_dhchap_key sha512 64 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=489dcce5af57d17cfc7e05dd6493aadb932dc0bcedb0b93db0b355af07e37308 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.kIp 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 489dcce5af57d17cfc7e05dd6493aadb932dc0bcedb0b93db0b355af07e37308 3 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 489dcce5af57d17cfc7e05dd6493aadb932dc0bcedb0b93db0b355af07e37308 3 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=489dcce5af57d17cfc7e05dd6493aadb932dc0bcedb0b93db0b355af07e37308 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.kIp 00:16:25.150 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.kIp 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@67 -- # ckeys[0]=/tmp/spdk.key-sha512.kIp 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha256 32 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.412 12:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=2696f9451e8ca8e880d8fe5125ffa222 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Hqz 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 2696f9451e8ca8e880d8fe5125ffa222 1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 2696f9451e8ca8e880d8fe5125ffa222 1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=2696f9451e8ca8e880d8fe5125ffa222 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Hqz 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Hqz 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # keys[1]=/tmp/spdk.key-sha256.Hqz 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # gen_dhchap_key sha384 48 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=de514c14bb6861f7240d49ab886e5a1d234504e3eeb0ded4 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.vXx 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key de514c14bb6861f7240d49ab886e5a1d234504e3eeb0ded4 2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 
de514c14bb6861f7240d49ab886e5a1d234504e3eeb0ded4 2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=de514c14bb6861f7240d49ab886e5a1d234504e3eeb0ded4 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.vXx 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.vXx 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@68 -- # ckeys[1]=/tmp/spdk.key-sha384.vXx 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha384 48 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha384 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=48 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=164399a0a7f1825ca0470685788f162840fb78c539537f1b 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.nfo 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 164399a0a7f1825ca0470685788f162840fb78c539537f1b 2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 164399a0a7f1825ca0470685788f162840fb78c539537f1b 2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=164399a0a7f1825ca0470685788f162840fb78c539537f1b 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=2 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.nfo 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.nfo 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # keys[2]=/tmp/spdk.key-sha384.nfo 00:16:25.412 12:02:12 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # gen_dhchap_key sha256 32 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha256 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=32 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=1712eb9d97e11a0f4ec8bb5c0327dfb4 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.I7M 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 1712eb9d97e11a0f4ec8bb5c0327dfb4 1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 1712eb9d97e11a0f4ec8bb5c0327dfb4 1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=1712eb9d97e11a0f4ec8bb5c0327dfb4 00:16:25.412 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=1 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.I7M 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.I7M 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@69 -- # ckeys[2]=/tmp/spdk.key-sha256.I7M 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # gen_dhchap_key sha512 64 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@723 -- # local digest len file key 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@724 -- # local -A digests 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # digest=sha512 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@726 -- # len=64 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@727 -- # key=8e97dada2b04fe02a16cb67c7acea9875ecd37ae03bdd1b04c9b2e2bc4c8a14b 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:16:25.413 
12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.OlH 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@729 -- # format_dhchap_key 8e97dada2b04fe02a16cb67c7acea9875ecd37ae03bdd1b04c9b2e2bc4c8a14b 3 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@719 -- # format_key DHHC-1 8e97dada2b04fe02a16cb67c7acea9875ecd37ae03bdd1b04c9b2e2bc4c8a14b 3 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@702 -- # local prefix key digest 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # key=8e97dada2b04fe02a16cb67c7acea9875ecd37ae03bdd1b04c9b2e2bc4c8a14b 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@704 -- # digest=3 00:16:25.413 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@705 -- # python - 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.OlH 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.OlH 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # keys[3]=/tmp/spdk.key-sha512.OlH 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@70 -- # ckeys[3]= 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@72 -- # waitforlisten 322709 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 322709 ']' 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@73 -- # waitforlisten 322766 /var/tmp/host.sock 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 322766 ']' 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/host.sock 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock...' 
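Each gen_dhchap_key call traced above draws a hex string from /dev/urandom with xxd and hands it to an inline python step via format_key. The python body itself is not captured in this log, so the sketch below is only an approximation inferred from the secrets printed further down (they begin with the base64 of the ASCII hex key and carry four extra trailing bytes, consistent with an appended CRC-32); treat it as illustrative rather than as SPDK's exact implementation. The digest ids passed to format_key above map 0 = null, 1 = sha256, 2 = sha384, 3 = sha512.

    # Illustrative sketch only -- the real format_key body is not shown in this log.
    key=$(xxd -p -c0 -l 24 /dev/urandom)   # 24 random bytes -> a 48-character hex string used as the secret
    python3 -c 'import sys,base64,zlib; k=sys.argv[1].encode(); print("DHHC-1:00:" + base64.b64encode(k + zlib.crc32(k).to_bytes(4, "little")).decode() + ":")' "$key"   # "00" = null digest; assumed little-endian CRC-32

The resulting DHHC-1 strings are what later appear verbatim as the --dhchap-secret/--dhchap-ctrl-secret arguments to nvme connect, while the /tmp/spdk.key-* files written here are registered on both sides further down in the log through keyring_file_add_key, nvmf_subsystem_add_host --dhchap-key/--dhchap-ctrlr-key and bdev_nvme_attach_controller --dhchap-key/--dhchap-ctrlr-key.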
00:16:25.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/host.sock... 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:16:25.673 12:02:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@74 -- # rpc_cmd 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.DdM 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key0 /tmp/spdk.key-null.DdM 00:16:25.932 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key0 /tmp/spdk.key-null.DdM 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha512.kIp ]] 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIp 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIp 00:16:26.192 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey0 /tmp/spdk.key-sha512.kIp 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-sha256.Hqz 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.451 12:02:13 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key1 /tmp/spdk.key-sha256.Hqz 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key1 /tmp/spdk.key-sha256.Hqz 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha384.vXx ]] 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vXx 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vXx 00:16:26.451 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey1 /tmp/spdk.key-sha384.vXx 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha384.nfo 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key2 /tmp/spdk.key-sha384.nfo 00:16:26.710 12:02:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key2 /tmp/spdk.key-sha384.nfo 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n /tmp/spdk.key-sha256.I7M ]] 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@85 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I7M 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@86 -- # hostrpc keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I7M 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key ckey2 /tmp/spdk.key-sha256.I7M 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@81 -- # for i in "${!keys[@]}" 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@82 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha512.OlH 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@83 -- # hostrpc keyring_file_add_key key3 /tmp/spdk.key-sha512.OlH 00:16:26.969 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock keyring_file_add_key key3 /tmp/spdk.key-sha512.OlH 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@84 -- # [[ -n '' ]] 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.229 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 0 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.489 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:27.748 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:27.748 { 00:16:27.748 "cntlid": 1, 00:16:27.748 "qid": 0, 00:16:27.748 "state": "enabled", 00:16:27.748 "thread": "nvmf_tgt_poll_group_000", 00:16:27.748 "listen_address": { 00:16:27.748 "trtype": "TCP", 00:16:27.748 "adrfam": "IPv4", 00:16:27.748 "traddr": "10.0.0.2", 00:16:27.748 "trsvcid": "4420" 00:16:27.748 }, 00:16:27.748 "peer_address": { 00:16:27.748 "trtype": "TCP", 00:16:27.748 "adrfam": "IPv4", 00:16:27.748 "traddr": "10.0.0.1", 00:16:27.748 "trsvcid": "56556" 00:16:27.748 }, 00:16:27.748 "auth": { 00:16:27.748 "state": "completed", 00:16:27.748 "digest": "sha256", 00:16:27.748 "dhgroup": "null" 00:16:27.748 } 00:16:27.748 } 00:16:27.748 ]' 00:16:27.748 12:02:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:28.008 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:28.267 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:28.834 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 1 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:28.834 12:02:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key 
ckey1 00:16:29.094 00:16:29.094 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:29.094 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:29.094 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:29.356 { 00:16:29.356 "cntlid": 3, 00:16:29.356 "qid": 0, 00:16:29.356 "state": "enabled", 00:16:29.356 "thread": "nvmf_tgt_poll_group_000", 00:16:29.356 "listen_address": { 00:16:29.356 "trtype": "TCP", 00:16:29.356 "adrfam": "IPv4", 00:16:29.356 "traddr": "10.0.0.2", 00:16:29.356 "trsvcid": "4420" 00:16:29.356 }, 00:16:29.356 "peer_address": { 00:16:29.356 "trtype": "TCP", 00:16:29.356 "adrfam": "IPv4", 00:16:29.356 "traddr": "10.0.0.1", 00:16:29.356 "trsvcid": "56586" 00:16:29.356 }, 00:16:29.356 "auth": { 00:16:29.356 "state": "completed", 00:16:29.356 "digest": "sha256", 00:16:29.356 "dhgroup": "null" 00:16:29.356 } 00:16:29.356 } 00:16:29.356 ]' 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:29.356 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:29.734 12:02:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:30.303 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 2 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.303 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:30.563 00:16:30.563 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:30.563 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:30.563 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:30.823 { 00:16:30.823 "cntlid": 5, 00:16:30.823 "qid": 0, 00:16:30.823 "state": "enabled", 00:16:30.823 "thread": "nvmf_tgt_poll_group_000", 00:16:30.823 "listen_address": { 00:16:30.823 "trtype": "TCP", 00:16:30.823 "adrfam": "IPv4", 00:16:30.823 "traddr": "10.0.0.2", 00:16:30.823 "trsvcid": "4420" 00:16:30.823 }, 00:16:30.823 "peer_address": { 00:16:30.823 "trtype": "TCP", 00:16:30.823 "adrfam": "IPv4", 00:16:30.823 "traddr": "10.0.0.1", 00:16:30.823 "trsvcid": "50110" 00:16:30.823 }, 00:16:30.823 "auth": { 00:16:30.823 "state": "completed", 00:16:30.823 "digest": "sha256", 00:16:30.823 "dhgroup": "null" 00:16:30.823 } 00:16:30.823 } 00:16:30.823 ]' 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:30.823 12:02:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:30.823 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:30.823 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:30.823 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:31.082 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:16:31.650 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:31.650 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:31.650 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:31.650 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 
-- # xtrace_disable 00:16:31.650 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.650 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups null 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 null 3 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:31.651 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:31.910 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:31.911 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.911 12:02:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:31.911 00:16:31.911 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:31.911 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:31.911 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:32.171 { 00:16:32.171 "cntlid": 7, 00:16:32.171 "qid": 0, 00:16:32.171 "state": "enabled", 00:16:32.171 "thread": "nvmf_tgt_poll_group_000", 00:16:32.171 "listen_address": { 00:16:32.171 "trtype": "TCP", 00:16:32.171 "adrfam": "IPv4", 00:16:32.171 "traddr": "10.0.0.2", 00:16:32.171 "trsvcid": "4420" 00:16:32.171 }, 00:16:32.171 "peer_address": { 00:16:32.171 "trtype": "TCP", 00:16:32.171 "adrfam": "IPv4", 00:16:32.171 "traddr": "10.0.0.1", 00:16:32.171 "trsvcid": "50144" 00:16:32.171 }, 00:16:32.171 "auth": { 00:16:32.171 "state": "completed", 00:16:32.171 "digest": "sha256", 00:16:32.171 "dhgroup": "null" 00:16:32.171 } 00:16:32.171 } 00:16:32.171 ]' 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:32.171 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:32.431 12:02:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:32.999 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:32.999 12:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:32.999 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 0 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.258 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:33.517 00:16:33.517 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:33.517 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:33.517 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:33.776 12:02:20 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:33.776 { 00:16:33.776 "cntlid": 9, 00:16:33.776 "qid": 0, 00:16:33.776 "state": "enabled", 00:16:33.776 "thread": "nvmf_tgt_poll_group_000", 00:16:33.776 "listen_address": { 00:16:33.776 "trtype": "TCP", 00:16:33.776 "adrfam": "IPv4", 00:16:33.776 "traddr": "10.0.0.2", 00:16:33.776 "trsvcid": "4420" 00:16:33.776 }, 00:16:33.776 "peer_address": { 00:16:33.776 "trtype": "TCP", 00:16:33.776 "adrfam": "IPv4", 00:16:33.776 "traddr": "10.0.0.1", 00:16:33.776 "trsvcid": "50162" 00:16:33.776 }, 00:16:33.776 "auth": { 00:16:33.776 "state": "completed", 00:16:33.776 "digest": "sha256", 00:16:33.776 "dhgroup": "ffdhe2048" 00:16:33.776 } 00:16:33.776 } 00:16:33.776 ]' 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:33.776 12:02:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:34.035 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:34.605 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 1 00:16:34.605 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:34.606 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:34.866 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:34.866 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.866 12:02:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:34.866 00:16:34.866 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:34.866 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:34.866 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:35.126 { 00:16:35.126 "cntlid": 11, 00:16:35.126 "qid": 0, 00:16:35.126 "state": "enabled", 00:16:35.126 "thread": "nvmf_tgt_poll_group_000", 00:16:35.126 "listen_address": { 
00:16:35.126 "trtype": "TCP", 00:16:35.126 "adrfam": "IPv4", 00:16:35.126 "traddr": "10.0.0.2", 00:16:35.126 "trsvcid": "4420" 00:16:35.126 }, 00:16:35.126 "peer_address": { 00:16:35.126 "trtype": "TCP", 00:16:35.126 "adrfam": "IPv4", 00:16:35.126 "traddr": "10.0.0.1", 00:16:35.126 "trsvcid": "50180" 00:16:35.126 }, 00:16:35.126 "auth": { 00:16:35.126 "state": "completed", 00:16:35.126 "digest": "sha256", 00:16:35.126 "dhgroup": "ffdhe2048" 00:16:35.126 } 00:16:35.126 } 00:16:35.126 ]' 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:35.126 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:35.385 12:02:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:35.953 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:35.953 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 2 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.212 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:36.470 00:16:36.470 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:36.470 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:36.470 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:36.729 { 00:16:36.729 "cntlid": 13, 00:16:36.729 "qid": 0, 00:16:36.729 "state": "enabled", 00:16:36.729 "thread": "nvmf_tgt_poll_group_000", 00:16:36.729 "listen_address": { 00:16:36.729 "trtype": "TCP", 00:16:36.729 "adrfam": "IPv4", 00:16:36.729 "traddr": "10.0.0.2", 00:16:36.729 "trsvcid": "4420" 00:16:36.729 }, 00:16:36.729 "peer_address": { 00:16:36.729 "trtype": "TCP", 00:16:36.729 "adrfam": "IPv4", 00:16:36.729 "traddr": "10.0.0.1", 00:16:36.729 "trsvcid": "50206" 00:16:36.729 }, 00:16:36.729 "auth": { 00:16:36.729 
"state": "completed", 00:16:36.729 "digest": "sha256", 00:16:36.729 "dhgroup": "ffdhe2048" 00:16:36.729 } 00:16:36.729 } 00:16:36.729 ]' 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:36.729 12:02:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:36.988 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:37.556 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe2048 3 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # key=key3 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:37.556 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.815 12:02:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:37.815 00:16:37.815 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:37.815 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:37.815 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:38.074 { 00:16:38.074 "cntlid": 15, 00:16:38.074 "qid": 0, 00:16:38.074 "state": "enabled", 00:16:38.074 "thread": "nvmf_tgt_poll_group_000", 00:16:38.074 "listen_address": { 00:16:38.074 "trtype": "TCP", 00:16:38.074 "adrfam": "IPv4", 00:16:38.074 "traddr": "10.0.0.2", 00:16:38.074 "trsvcid": "4420" 00:16:38.074 }, 00:16:38.074 "peer_address": { 00:16:38.074 "trtype": "TCP", 00:16:38.074 "adrfam": "IPv4", 00:16:38.074 "traddr": "10.0.0.1", 00:16:38.074 "trsvcid": "50234" 00:16:38.074 }, 00:16:38.074 "auth": { 00:16:38.074 "state": "completed", 00:16:38.074 "digest": "sha256", 00:16:38.074 "dhgroup": "ffdhe2048" 00:16:38.074 } 00:16:38.074 } 00:16:38.074 ]' 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:38.074 12:02:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:16:38.074 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:38.333 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:38.333 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:38.333 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:38.333 12:02:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:38.901 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:38.901 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 0 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.160 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:39.419 00:16:39.419 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:39.419 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:39.419 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:39.678 { 00:16:39.678 "cntlid": 17, 00:16:39.678 "qid": 0, 00:16:39.678 "state": "enabled", 00:16:39.678 "thread": "nvmf_tgt_poll_group_000", 00:16:39.678 "listen_address": { 00:16:39.678 "trtype": "TCP", 00:16:39.678 "adrfam": "IPv4", 00:16:39.678 "traddr": "10.0.0.2", 00:16:39.678 "trsvcid": "4420" 00:16:39.678 }, 00:16:39.678 "peer_address": { 00:16:39.678 "trtype": "TCP", 00:16:39.678 "adrfam": "IPv4", 00:16:39.678 "traddr": "10.0.0.1", 00:16:39.678 "trsvcid": "46016" 00:16:39.678 }, 00:16:39.678 "auth": { 00:16:39.678 "state": "completed", 00:16:39.678 "digest": "sha256", 00:16:39.678 "dhgroup": "ffdhe3072" 00:16:39.678 } 00:16:39.678 } 00:16:39.678 ]' 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:39.678 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:39.678 12:02:26 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:39.679 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:39.679 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:39.679 12:02:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:39.938 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:40.506 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 1 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:40.506 12:02:27 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.506 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:40.766 00:16:40.766 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:40.766 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:40.766 12:02:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:41.027 { 00:16:41.027 "cntlid": 19, 00:16:41.027 "qid": 0, 00:16:41.027 "state": "enabled", 00:16:41.027 "thread": "nvmf_tgt_poll_group_000", 00:16:41.027 "listen_address": { 00:16:41.027 "trtype": "TCP", 00:16:41.027 "adrfam": "IPv4", 00:16:41.027 "traddr": "10.0.0.2", 00:16:41.027 "trsvcid": "4420" 00:16:41.027 }, 00:16:41.027 "peer_address": { 00:16:41.027 "trtype": "TCP", 00:16:41.027 "adrfam": "IPv4", 00:16:41.027 "traddr": "10.0.0.1", 00:16:41.027 "trsvcid": "46046" 00:16:41.027 }, 00:16:41.027 "auth": { 00:16:41.027 "state": "completed", 00:16:41.027 "digest": "sha256", 00:16:41.027 "dhgroup": "ffdhe3072" 00:16:41.027 } 00:16:41.027 } 00:16:41.027 ]' 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:41.027 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:41.286 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:41.286 12:02:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:41.286 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:41.286 12:02:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:41.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:41.854 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 2 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.113 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.114 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.114 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.114 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:42.372 00:16:42.372 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:42.372 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:42.372 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:42.631 { 00:16:42.631 "cntlid": 21, 00:16:42.631 "qid": 0, 00:16:42.631 "state": "enabled", 00:16:42.631 "thread": "nvmf_tgt_poll_group_000", 00:16:42.631 "listen_address": { 00:16:42.631 "trtype": "TCP", 00:16:42.631 "adrfam": "IPv4", 00:16:42.631 "traddr": "10.0.0.2", 00:16:42.631 "trsvcid": "4420" 00:16:42.631 }, 00:16:42.631 "peer_address": { 00:16:42.631 "trtype": "TCP", 00:16:42.631 "adrfam": "IPv4", 00:16:42.631 "traddr": "10.0.0.1", 00:16:42.631 "trsvcid": "46072" 00:16:42.631 }, 00:16:42.631 "auth": { 00:16:42.631 "state": "completed", 00:16:42.631 "digest": "sha256", 00:16:42.631 "dhgroup": "ffdhe3072" 00:16:42.631 } 00:16:42.631 } 00:16:42.631 ]' 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:42.631 12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:42.890 
12:02:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:43.458 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe3072 3 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.458 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:43.778 00:16:43.778 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:43.778 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:43.778 12:02:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:44.036 { 00:16:44.036 "cntlid": 23, 00:16:44.036 "qid": 0, 00:16:44.036 "state": "enabled", 00:16:44.036 "thread": "nvmf_tgt_poll_group_000", 00:16:44.036 "listen_address": { 00:16:44.036 "trtype": "TCP", 00:16:44.036 "adrfam": "IPv4", 00:16:44.036 "traddr": "10.0.0.2", 00:16:44.036 "trsvcid": "4420" 00:16:44.036 }, 00:16:44.036 "peer_address": { 00:16:44.036 "trtype": "TCP", 00:16:44.036 "adrfam": "IPv4", 00:16:44.036 "traddr": "10.0.0.1", 00:16:44.036 "trsvcid": "46108" 00:16:44.036 }, 00:16:44.036 "auth": { 00:16:44.036 "state": "completed", 00:16:44.036 "digest": "sha256", 00:16:44.036 "dhgroup": "ffdhe3072" 00:16:44.036 } 00:16:44.036 } 00:16:44.036 ]' 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:44.036 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:16:44.037 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:44.037 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:44.037 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:44.037 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:44.295 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:16:44.863 12:02:31 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:44.863 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.863 12:02:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:44.863 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 0 00:16:44.863 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:44.863 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:44.863 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.122 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:45.380 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:45.380 { 00:16:45.380 "cntlid": 25, 00:16:45.380 "qid": 0, 00:16:45.380 "state": "enabled", 00:16:45.380 "thread": "nvmf_tgt_poll_group_000", 00:16:45.380 "listen_address": { 00:16:45.380 "trtype": "TCP", 00:16:45.380 "adrfam": "IPv4", 00:16:45.380 "traddr": "10.0.0.2", 00:16:45.380 "trsvcid": "4420" 00:16:45.380 }, 00:16:45.380 "peer_address": { 00:16:45.380 "trtype": "TCP", 00:16:45.380 "adrfam": "IPv4", 00:16:45.380 "traddr": "10.0.0.1", 00:16:45.380 "trsvcid": "46130" 00:16:45.380 }, 00:16:45.380 "auth": { 00:16:45.380 "state": "completed", 00:16:45.380 "digest": "sha256", 00:16:45.380 "dhgroup": "ffdhe4096" 00:16:45.380 } 00:16:45.380 } 00:16:45.380 ]' 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:45.380 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:45.639 12:02:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:46.205 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 
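For reference, each block of trace above and below is one pass of the connect_authenticate loop in target/auth.sh, repeated for every digest/dhgroup/key combination. A condensed sketch of a single pass follows; it is assembled from the commands visible in the trace, not taken from the script itself. Here rpc_cmd and hostrpc stand for the test's wrappers around the target RPC socket and around rpc.py -s /var/tmp/host.sock (both visible above), KEY is a placeholder for the key0..key3 index, and the DHHC-1 secrets are placeholders rather than the real key material logged here:

# one connect_authenticate pass, sketched from the trace (placeholders: KEY, DHHC-1 secrets)
HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
SUBNQN=nqn.2024-03.io.spdk:cnode0
HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562
KEY=key2   # placeholder: the loop walks key0..key3; the ctrlr key is passed only when ckeyN exists
# host side: restrict bdev_nvme to the digest/dhgroup pair under test
$HOSTRPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072
# target side: allow the host with the matching DH-HMAC-CHAP key(s)
rpc_cmd nvmf_subsystem_add_host $SUBNQN $HOSTNQN --dhchap-key $KEY --dhchap-ctrlr-key c$KEY
# attach from the SPDK host app, then verify the authenticated qpair on the target
$HOSTRPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
    -q $HOSTNQN -n $SUBNQN --dhchap-key $KEY --dhchap-ctrlr-key c$KEY
rpc_cmd nvmf_subsystem_get_qpairs $SUBNQN | jq -r '.[0].auth'   # expects digest, dhgroup, state == completed
$HOSTRPC bdev_nvme_detach_controller nvme0
# repeat the handshake with the kernel initiator, then tear the host entry back down
nvme connect -t tcp -a 10.0.0.2 -n $SUBNQN -i 1 -q $HOSTNQN --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 \
    --dhchap-secret 'DHHC-1:01:<key>:' --dhchap-ctrl-secret 'DHHC-1:02:<ctrlr-key>:'
nvme disconnect -n $SUBNQN
rpc_cmd nvmf_subsystem_remove_host $SUBNQN $HOSTNQN

The trace resumes below with the next key/dhgroup combination.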
00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.205 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 1 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.464 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:46.722 00:16:46.722 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:46.722 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:46.722 12:02:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:46.980 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:46.980 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:46.981 { 00:16:46.981 "cntlid": 27, 00:16:46.981 "qid": 0, 00:16:46.981 "state": "enabled", 00:16:46.981 "thread": "nvmf_tgt_poll_group_000", 00:16:46.981 "listen_address": { 00:16:46.981 "trtype": "TCP", 00:16:46.981 "adrfam": "IPv4", 00:16:46.981 "traddr": "10.0.0.2", 00:16:46.981 "trsvcid": "4420" 00:16:46.981 }, 00:16:46.981 "peer_address": { 00:16:46.981 "trtype": "TCP", 00:16:46.981 "adrfam": "IPv4", 00:16:46.981 "traddr": "10.0.0.1", 00:16:46.981 "trsvcid": "46162" 00:16:46.981 }, 00:16:46.981 "auth": { 00:16:46.981 "state": "completed", 00:16:46.981 "digest": "sha256", 00:16:46.981 "dhgroup": "ffdhe4096" 00:16:46.981 } 00:16:46.981 } 00:16:46.981 ]' 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:46.981 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:47.239 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:16:47.806 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:47.806 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:47.806 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:47.806 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:16:47.806 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:47.806 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:47.807 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:47.807 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:47.807 12:02:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 2 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:48.065 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.066 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.066 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.066 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.066 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.066 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:48.324 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:48.324 { 00:16:48.324 "cntlid": 29, 00:16:48.324 "qid": 0, 00:16:48.324 "state": "enabled", 00:16:48.324 "thread": "nvmf_tgt_poll_group_000", 00:16:48.324 "listen_address": { 00:16:48.324 "trtype": "TCP", 00:16:48.324 "adrfam": "IPv4", 00:16:48.324 "traddr": "10.0.0.2", 00:16:48.324 "trsvcid": "4420" 00:16:48.324 }, 00:16:48.324 "peer_address": { 00:16:48.324 "trtype": "TCP", 00:16:48.324 "adrfam": "IPv4", 00:16:48.324 "traddr": "10.0.0.1", 00:16:48.324 "trsvcid": "46198" 00:16:48.324 }, 00:16:48.324 "auth": { 00:16:48.324 "state": "completed", 00:16:48.324 "digest": "sha256", 00:16:48.324 "dhgroup": "ffdhe4096" 00:16:48.324 } 00:16:48.324 } 00:16:48.324 ]' 00:16:48.324 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:48.583 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:48.842 12:02:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:49.409 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.409 12:02:36 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe4096 3 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.409 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:49.668 00:16:49.668 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:49.668 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:49.668 12:02:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 
0 ]] 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:49.927 { 00:16:49.927 "cntlid": 31, 00:16:49.927 "qid": 0, 00:16:49.927 "state": "enabled", 00:16:49.927 "thread": "nvmf_tgt_poll_group_000", 00:16:49.927 "listen_address": { 00:16:49.927 "trtype": "TCP", 00:16:49.927 "adrfam": "IPv4", 00:16:49.927 "traddr": "10.0.0.2", 00:16:49.927 "trsvcid": "4420" 00:16:49.927 }, 00:16:49.927 "peer_address": { 00:16:49.927 "trtype": "TCP", 00:16:49.927 "adrfam": "IPv4", 00:16:49.927 "traddr": "10.0.0.1", 00:16:49.927 "trsvcid": "60162" 00:16:49.927 }, 00:16:49.927 "auth": { 00:16:49.927 "state": "completed", 00:16:49.927 "digest": "sha256", 00:16:49.927 "dhgroup": "ffdhe4096" 00:16:49.927 } 00:16:49.927 } 00:16:49.927 ]' 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:49.927 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:50.185 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:50.752 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:50.752 12:02:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 0 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.011 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:51.269 00:16:51.269 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:51.269 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:51.269 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:51.528 { 00:16:51.528 "cntlid": 33, 00:16:51.528 "qid": 0, 00:16:51.528 "state": "enabled", 00:16:51.528 "thread": "nvmf_tgt_poll_group_000", 00:16:51.528 "listen_address": { 
00:16:51.528 "trtype": "TCP", 00:16:51.528 "adrfam": "IPv4", 00:16:51.528 "traddr": "10.0.0.2", 00:16:51.528 "trsvcid": "4420" 00:16:51.528 }, 00:16:51.528 "peer_address": { 00:16:51.528 "trtype": "TCP", 00:16:51.528 "adrfam": "IPv4", 00:16:51.528 "traddr": "10.0.0.1", 00:16:51.528 "trsvcid": "60180" 00:16:51.528 }, 00:16:51.528 "auth": { 00:16:51.528 "state": "completed", 00:16:51.528 "digest": "sha256", 00:16:51.528 "dhgroup": "ffdhe6144" 00:16:51.528 } 00:16:51.528 } 00:16:51.528 ]' 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:51.528 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:51.786 12:02:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:52.354 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 1 00:16:52.354 12:02:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.354 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:52.922 00:16:52.922 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:52.922 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:52.922 12:02:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:52.922 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:52.922 { 00:16:52.922 "cntlid": 35, 00:16:52.922 "qid": 0, 00:16:52.922 "state": "enabled", 00:16:52.922 "thread": "nvmf_tgt_poll_group_000", 00:16:52.922 "listen_address": { 00:16:52.922 "trtype": "TCP", 00:16:52.922 "adrfam": "IPv4", 00:16:52.922 "traddr": "10.0.0.2", 00:16:52.922 "trsvcid": "4420" 00:16:52.923 }, 00:16:52.923 "peer_address": { 00:16:52.923 "trtype": "TCP", 00:16:52.923 "adrfam": "IPv4", 00:16:52.923 "traddr": "10.0.0.1", 00:16:52.923 "trsvcid": "60216" 00:16:52.923 
}, 00:16:52.923 "auth": { 00:16:52.923 "state": "completed", 00:16:52.923 "digest": "sha256", 00:16:52.923 "dhgroup": "ffdhe6144" 00:16:52.923 } 00:16:52.923 } 00:16:52.923 ]' 00:16:52.923 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:52.923 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:52.923 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:53.182 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:53.182 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:53.182 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:53.182 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:53.182 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:53.440 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:54.008 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.008 12:02:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 2 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:54.008 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.008 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:16:54.267 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:54.524 { 00:16:54.524 "cntlid": 37, 00:16:54.524 "qid": 0, 00:16:54.524 "state": "enabled", 00:16:54.524 "thread": "nvmf_tgt_poll_group_000", 00:16:54.524 "listen_address": { 00:16:54.524 "trtype": "TCP", 00:16:54.524 "adrfam": "IPv4", 00:16:54.524 "traddr": "10.0.0.2", 00:16:54.524 "trsvcid": "4420" 00:16:54.524 }, 00:16:54.524 "peer_address": { 00:16:54.524 "trtype": "TCP", 00:16:54.524 "adrfam": "IPv4", 00:16:54.524 "traddr": "10.0.0.1", 00:16:54.524 "trsvcid": "60232" 00:16:54.524 }, 00:16:54.524 "auth": { 00:16:54.524 "state": "completed", 00:16:54.524 "digest": "sha256", 00:16:54.524 "dhgroup": "ffdhe6144" 00:16:54.524 } 00:16:54.524 } 00:16:54.524 ]' 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:54.524 12:02:41 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:54.524 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:54.783 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:54.783 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:54.783 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:54.783 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:54.783 12:02:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:54.783 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:55.351 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.351 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe6144 3 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd 
nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.609 12:02:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:16:55.867 00:16:55.867 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:55.867 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:55.867 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.125 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:56.125 { 00:16:56.126 "cntlid": 39, 00:16:56.126 "qid": 0, 00:16:56.126 "state": "enabled", 00:16:56.126 "thread": "nvmf_tgt_poll_group_000", 00:16:56.126 "listen_address": { 00:16:56.126 "trtype": "TCP", 00:16:56.126 "adrfam": "IPv4", 00:16:56.126 "traddr": "10.0.0.2", 00:16:56.126 "trsvcid": "4420" 00:16:56.126 }, 00:16:56.126 "peer_address": { 00:16:56.126 "trtype": "TCP", 00:16:56.126 "adrfam": "IPv4", 00:16:56.126 "traddr": "10.0.0.1", 00:16:56.126 "trsvcid": "60254" 00:16:56.126 }, 00:16:56.126 "auth": { 00:16:56.126 "state": "completed", 00:16:56.126 "digest": "sha256", 00:16:56.126 "dhgroup": "ffdhe6144" 00:16:56.126 } 00:16:56.126 } 00:16:56.126 ]' 00:16:56.126 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:56.126 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:56.126 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:56.385 12:02:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:56.959 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:56.959 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 0 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@10 -- # set +x 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.217 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:16:57.785 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:57.785 { 00:16:57.785 "cntlid": 41, 00:16:57.785 "qid": 0, 00:16:57.785 "state": "enabled", 00:16:57.785 "thread": "nvmf_tgt_poll_group_000", 00:16:57.785 "listen_address": { 00:16:57.785 "trtype": "TCP", 00:16:57.785 "adrfam": "IPv4", 00:16:57.785 "traddr": "10.0.0.2", 00:16:57.785 "trsvcid": "4420" 00:16:57.785 }, 00:16:57.785 "peer_address": { 00:16:57.785 "trtype": "TCP", 00:16:57.785 "adrfam": "IPv4", 00:16:57.785 "traddr": "10.0.0.1", 00:16:57.785 "trsvcid": "60294" 00:16:57.785 }, 00:16:57.785 "auth": { 00:16:57.785 "state": "completed", 00:16:57.785 "digest": "sha256", 00:16:57.785 "dhgroup": "ffdhe8192" 00:16:57.785 } 00:16:57.785 } 00:16:57.785 ]' 00:16:57.785 12:02:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:57.785 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:57.785 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc 
bdev_nvme_detach_controller nvme0 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:16:58.102 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:16:58.669 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.669 12:02:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 1 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 
-a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:58.928 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:16:59.494 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:16:59.494 { 00:16:59.494 "cntlid": 43, 00:16:59.494 "qid": 0, 00:16:59.494 "state": "enabled", 00:16:59.494 "thread": "nvmf_tgt_poll_group_000", 00:16:59.494 "listen_address": { 00:16:59.494 "trtype": "TCP", 00:16:59.494 "adrfam": "IPv4", 00:16:59.494 "traddr": "10.0.0.2", 00:16:59.494 "trsvcid": "4420" 00:16:59.494 }, 00:16:59.494 "peer_address": { 00:16:59.494 "trtype": "TCP", 00:16:59.494 "adrfam": "IPv4", 00:16:59.494 "traddr": "10.0.0.1", 00:16:59.494 "trsvcid": "60322" 00:16:59.494 }, 00:16:59.494 "auth": { 00:16:59.494 "state": "completed", 00:16:59.494 "digest": "sha256", 00:16:59.494 "dhgroup": "ffdhe8192" 00:16:59.494 } 00:16:59.494 } 00:16:59.494 ]' 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:16:59.494 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:16:59.753 12:02:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:00.012 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
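Each successful attach is then verified from the target side: nvmf_subsystem_get_qpairs produces the JSON arrays interspersed above, and the test asserts the negotiated parameters with jq. A minimal sketch of that check, assuming the rpc_cmd helper used throughout the trace:

  # dump the subsystem's qpairs and check the negotiated auth parameters of the first one
  qpairs=$(rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
  [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha256 ]]     # digest under test
  [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]  # dhgroup under test
  [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]  # authentication finished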
target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:00.581 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 2 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:00.581 12:02:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:01.151 00:17:01.151 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:01.151 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:01.151 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:01.411 { 00:17:01.411 "cntlid": 45, 00:17:01.411 "qid": 0, 00:17:01.411 "state": "enabled", 00:17:01.411 "thread": "nvmf_tgt_poll_group_000", 00:17:01.411 "listen_address": { 00:17:01.411 "trtype": "TCP", 00:17:01.411 "adrfam": "IPv4", 00:17:01.411 "traddr": "10.0.0.2", 00:17:01.411 "trsvcid": "4420" 00:17:01.411 }, 00:17:01.411 "peer_address": { 00:17:01.411 "trtype": "TCP", 00:17:01.411 "adrfam": "IPv4", 00:17:01.411 "traddr": "10.0.0.1", 00:17:01.411 "trsvcid": "53052" 00:17:01.411 }, 00:17:01.411 "auth": { 00:17:01.411 "state": "completed", 00:17:01.411 "digest": "sha256", 00:17:01.411 "dhgroup": "ffdhe8192" 00:17:01.411 } 00:17:01.411 } 00:17:01.411 ]' 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:01.411 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:01.670 12:02:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret 
DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:02.239 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha256 ffdhe8192 3 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha256 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.239 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:02.805 00:17:02.805 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:02.805 12:02:49 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:02.805 12:02:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:03.063 { 00:17:03.063 "cntlid": 47, 00:17:03.063 "qid": 0, 00:17:03.063 "state": "enabled", 00:17:03.063 "thread": "nvmf_tgt_poll_group_000", 00:17:03.063 "listen_address": { 00:17:03.063 "trtype": "TCP", 00:17:03.063 "adrfam": "IPv4", 00:17:03.063 "traddr": "10.0.0.2", 00:17:03.063 "trsvcid": "4420" 00:17:03.063 }, 00:17:03.063 "peer_address": { 00:17:03.063 "trtype": "TCP", 00:17:03.063 "adrfam": "IPv4", 00:17:03.063 "traddr": "10.0.0.1", 00:17:03.063 "trsvcid": "53092" 00:17:03.063 }, 00:17:03.063 "auth": { 00:17:03.063 "state": "completed", 00:17:03.063 "digest": "sha256", 00:17:03.063 "dhgroup": "ffdhe8192" 00:17:03.063 } 00:17:03.063 } 00:17:03.063 ]' 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha256 == \s\h\a\2\5\6 ]] 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:03.063 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:03.321 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:03.888 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
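Besides the SPDK initiator, every iteration also exercises the kernel host with nvme-cli: the RPC-attached controller is detached, the same subsystem is connected with the secrets passed on the command line, and the host entry is removed from the target afterwards. A condensed sketch of that leg (secrets abbreviated; $hostnqn and $hostid are the uuid values shown in the log):

  hostrpc bdev_nvme_detach_controller nvme0
  nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 \
      -q "$hostnqn" --hostid "$hostid" \
      --dhchap-secret 'DHHC-1:03:...=:' --dhchap-ctrl-secret 'DHHC-1:01:...:'   # ctrl secret only when a ckey exists
  nvme disconnect -n nqn.2024-03.io.spdk:cnode0
  rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"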
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.888 12:02:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 0 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:03.888 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:03.889 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:04.147 00:17:04.147 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:04.147 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:04.147 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
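At this point the trace rolls over from the sha256/ffdhe* combinations to sha384 with the null dhgroup. The enclosing structure in target/auth.sh is a triple loop over digests, dhgroups and key indices (the @91/@92/@93 markers above); roughly, and inferred from those markers rather than copied verbatim:

  for digest in "${digests[@]}"; do
    for dhgroup in "${dhgroups[@]}"; do
      for keyid in "${!keys[@]}"; do
        # reconfigure the host to accept exactly this combination, then run one auth cycle
        hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
        connect_authenticate "$digest" "$dhgroup" "$keyid"
      done
    done
  done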
target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:04.407 { 00:17:04.407 "cntlid": 49, 00:17:04.407 "qid": 0, 00:17:04.407 "state": "enabled", 00:17:04.407 "thread": "nvmf_tgt_poll_group_000", 00:17:04.407 "listen_address": { 00:17:04.407 "trtype": "TCP", 00:17:04.407 "adrfam": "IPv4", 00:17:04.407 "traddr": "10.0.0.2", 00:17:04.407 "trsvcid": "4420" 00:17:04.407 }, 00:17:04.407 "peer_address": { 00:17:04.407 "trtype": "TCP", 00:17:04.407 "adrfam": "IPv4", 00:17:04.407 "traddr": "10.0.0.1", 00:17:04.407 "trsvcid": "53106" 00:17:04.407 }, 00:17:04.407 "auth": { 00:17:04.407 "state": "completed", 00:17:04.407 "digest": "sha384", 00:17:04.407 "dhgroup": "null" 00:17:04.407 } 00:17:04.407 } 00:17:04.407 ]' 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:04.407 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:04.666 12:02:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:05.235 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:05.235 12:02:52 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.235 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 1 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.495 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:05.754 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # 
rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:05.754 { 00:17:05.754 "cntlid": 51, 00:17:05.754 "qid": 0, 00:17:05.754 "state": "enabled", 00:17:05.754 "thread": "nvmf_tgt_poll_group_000", 00:17:05.754 "listen_address": { 00:17:05.754 "trtype": "TCP", 00:17:05.754 "adrfam": "IPv4", 00:17:05.754 "traddr": "10.0.0.2", 00:17:05.754 "trsvcid": "4420" 00:17:05.754 }, 00:17:05.754 "peer_address": { 00:17:05.754 "trtype": "TCP", 00:17:05.754 "adrfam": "IPv4", 00:17:05.754 "traddr": "10.0.0.1", 00:17:05.754 "trsvcid": "53136" 00:17:05.754 }, 00:17:05.754 "auth": { 00:17:05.754 "state": "completed", 00:17:05.754 "digest": "sha384", 00:17:05.754 "dhgroup": "null" 00:17:05.754 } 00:17:05.754 } 00:17:05.754 ]' 00:17:05.754 12:02:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:06.015 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:06.280 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:06.851 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:06.851 12:02:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 2 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:06.851 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:07.110 00:17:07.110 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:07.110 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:07.110 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # 
[[ 0 == 0 ]] 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:07.369 { 00:17:07.369 "cntlid": 53, 00:17:07.369 "qid": 0, 00:17:07.369 "state": "enabled", 00:17:07.369 "thread": "nvmf_tgt_poll_group_000", 00:17:07.369 "listen_address": { 00:17:07.369 "trtype": "TCP", 00:17:07.369 "adrfam": "IPv4", 00:17:07.369 "traddr": "10.0.0.2", 00:17:07.369 "trsvcid": "4420" 00:17:07.369 }, 00:17:07.369 "peer_address": { 00:17:07.369 "trtype": "TCP", 00:17:07.369 "adrfam": "IPv4", 00:17:07.369 "traddr": "10.0.0.1", 00:17:07.369 "trsvcid": "53166" 00:17:07.369 }, 00:17:07.369 "auth": { 00:17:07.369 "state": "completed", 00:17:07.369 "digest": "sha384", 00:17:07.369 "dhgroup": "null" 00:17:07.369 } 00:17:07.369 } 00:17:07.369 ]' 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:07.369 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:07.628 12:02:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:08.195 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.195 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
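One asymmetry worth noting in the trace: the key0..key2 iterations pass --dhchap-ctrlr-key ckeyN alongside --dhchap-key keyN (bidirectional authentication), while the key3 iterations pass only --dhchap-key key3. That comes from the ckey expansion at target/auth.sh@37 inside connect_authenticate, where $3 is the key index; it appends the controller-key arguments only when a ckey was generated for that index. Schematically ($subnqn and $hostnqn stand for the NQNs used above):

  # expands to nothing when ckeys[$3] is empty, so key3 runs unidirectional auth
  ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"})
  rpc_cmd nvmf_subsystem_add_host "$subnqn" "$hostnqn" --dhchap-key "key$3" "${ckey[@]}"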
bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups null 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 null 3 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:08.453 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:08.453 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:08.712 { 00:17:08.712 "cntlid": 55, 00:17:08.712 "qid": 0, 00:17:08.712 "state": "enabled", 00:17:08.712 "thread": "nvmf_tgt_poll_group_000", 00:17:08.712 "listen_address": { 00:17:08.712 "trtype": "TCP", 00:17:08.712 "adrfam": "IPv4", 00:17:08.712 "traddr": "10.0.0.2", 00:17:08.712 "trsvcid": "4420" 00:17:08.712 }, 00:17:08.712 "peer_address": { 
00:17:08.712 "trtype": "TCP", 00:17:08.712 "adrfam": "IPv4", 00:17:08.712 "traddr": "10.0.0.1", 00:17:08.712 "trsvcid": "53204" 00:17:08.712 }, 00:17:08.712 "auth": { 00:17:08.712 "state": "completed", 00:17:08.712 "digest": "sha384", 00:17:08.712 "dhgroup": "null" 00:17:08.712 } 00:17:08.712 } 00:17:08.712 ]' 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:08.712 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:08.971 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:08.971 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:08.971 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:08.971 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:08.971 12:02:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:08.971 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:09.539 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.539 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 0 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@36 -- # digest=sha384 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:09.798 12:02:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:10.056 00:17:10.056 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:10.056 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:10.056 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:10.315 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:10.316 { 00:17:10.316 "cntlid": 57, 00:17:10.316 "qid": 0, 00:17:10.316 "state": "enabled", 00:17:10.316 "thread": "nvmf_tgt_poll_group_000", 00:17:10.316 "listen_address": { 00:17:10.316 "trtype": "TCP", 00:17:10.316 "adrfam": "IPv4", 00:17:10.316 "traddr": "10.0.0.2", 00:17:10.316 "trsvcid": "4420" 00:17:10.316 }, 00:17:10.316 "peer_address": { 00:17:10.316 "trtype": "TCP", 00:17:10.316 "adrfam": "IPv4", 00:17:10.316 "traddr": "10.0.0.1", 00:17:10.316 "trsvcid": "54898" 00:17:10.316 }, 00:17:10.316 "auth": { 00:17:10.316 "state": "completed", 00:17:10.316 "digest": "sha384", 00:17:10.316 "dhgroup": "ffdhe2048" 00:17:10.316 } 00:17:10.316 } 00:17:10.316 ]' 
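Before any qpair inspection, the test first confirms that the attach actually produced a controller on the host: bdev_nvme_get_controllers is queried over the host socket and the reported name is compared against nvme0, and once the checks pass the controller is torn down again. A minimal sketch of that bracketing, using the hostrpc helper from the trace:

  name=$(hostrpc bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                       # attach (and thus authentication) succeeded
  ...                                        # qpair/auth checks go here
  hostrpc bdev_nvme_detach_controller nvme0  # clean up before the nvme-cli leg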
00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:10.316 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:10.575 12:02:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:11.143 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 1 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.143 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:11.403 00:17:11.403 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:11.403 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:11.403 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:11.662 { 00:17:11.662 "cntlid": 59, 00:17:11.662 "qid": 0, 00:17:11.662 "state": "enabled", 00:17:11.662 "thread": "nvmf_tgt_poll_group_000", 00:17:11.662 "listen_address": { 00:17:11.662 "trtype": "TCP", 00:17:11.662 "adrfam": "IPv4", 00:17:11.662 "traddr": "10.0.0.2", 00:17:11.662 "trsvcid": "4420" 00:17:11.662 }, 00:17:11.662 "peer_address": { 00:17:11.662 "trtype": "TCP", 00:17:11.662 "adrfam": "IPv4", 00:17:11.662 "traddr": "10.0.0.1", 00:17:11.662 "trsvcid": "54928" 00:17:11.662 }, 00:17:11.662 "auth": { 00:17:11.662 "state": "completed", 00:17:11.662 "digest": "sha384", 00:17:11.662 "dhgroup": "ffdhe2048" 00:17:11.662 } 00:17:11.662 } 00:17:11.662 ]' 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:11.662 12:02:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:11.920 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:12.523 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 2 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:12.523 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.782 
12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:12.782 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:12.782 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:12.782 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.782 12:02:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:12.782 00:17:12.782 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:12.782 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:12.782 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:13.040 { 00:17:13.040 "cntlid": 61, 00:17:13.040 "qid": 0, 00:17:13.040 "state": "enabled", 00:17:13.040 "thread": "nvmf_tgt_poll_group_000", 00:17:13.040 "listen_address": { 00:17:13.040 "trtype": "TCP", 00:17:13.040 "adrfam": "IPv4", 00:17:13.040 "traddr": "10.0.0.2", 00:17:13.040 "trsvcid": "4420" 00:17:13.040 }, 00:17:13.040 "peer_address": { 00:17:13.040 "trtype": "TCP", 00:17:13.040 "adrfam": "IPv4", 00:17:13.040 "traddr": "10.0.0.1", 00:17:13.040 "trsvcid": "54948" 00:17:13.040 }, 00:17:13.040 "auth": { 00:17:13.040 "state": "completed", 00:17:13.040 "digest": "sha384", 00:17:13.040 "dhgroup": "ffdhe2048" 00:17:13.040 } 00:17:13.040 } 00:17:13.040 ]' 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:13.040 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:13.298 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:13.298 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:13.298 12:03:00 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:13.298 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:13.298 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:13.298 12:03:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:13.866 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:13.866 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe2048 3 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.125 
12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.125 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:14.384 00:17:14.384 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:14.384 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:14.384 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:14.643 { 00:17:14.643 "cntlid": 63, 00:17:14.643 "qid": 0, 00:17:14.643 "state": "enabled", 00:17:14.643 "thread": "nvmf_tgt_poll_group_000", 00:17:14.643 "listen_address": { 00:17:14.643 "trtype": "TCP", 00:17:14.643 "adrfam": "IPv4", 00:17:14.643 "traddr": "10.0.0.2", 00:17:14.643 "trsvcid": "4420" 00:17:14.643 }, 00:17:14.643 "peer_address": { 00:17:14.643 "trtype": "TCP", 00:17:14.643 "adrfam": "IPv4", 00:17:14.643 "traddr": "10.0.0.1", 00:17:14.643 "trsvcid": "54980" 00:17:14.643 }, 00:17:14.643 "auth": { 00:17:14.643 "state": "completed", 00:17:14.643 "digest": "sha384", 00:17:14.643 "dhgroup": "ffdhe2048" 00:17:14.643 } 00:17:14.643 } 00:17:14.643 ]' 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:14.643 12:03:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_detach_controller nvme0 00:17:14.901 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:15.469 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.469 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 0 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.728 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.728 12:03:02 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:15.728 00:17:15.987 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:15.987 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:15.987 12:03:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:15.987 { 00:17:15.987 "cntlid": 65, 00:17:15.987 "qid": 0, 00:17:15.987 "state": "enabled", 00:17:15.987 "thread": "nvmf_tgt_poll_group_000", 00:17:15.987 "listen_address": { 00:17:15.987 "trtype": "TCP", 00:17:15.987 "adrfam": "IPv4", 00:17:15.987 "traddr": "10.0.0.2", 00:17:15.987 "trsvcid": "4420" 00:17:15.987 }, 00:17:15.987 "peer_address": { 00:17:15.987 "trtype": "TCP", 00:17:15.987 "adrfam": "IPv4", 00:17:15.987 "traddr": "10.0.0.1", 00:17:15.987 "trsvcid": "55010" 00:17:15.987 }, 00:17:15.987 "auth": { 00:17:15.987 "state": "completed", 00:17:15.987 "digest": "sha384", 00:17:15.987 "dhgroup": "ffdhe3072" 00:17:15.987 } 00:17:15.987 } 00:17:15.987 ]' 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:15.987 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:16.247 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 
80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:16.815 12:03:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:16.815 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:16.815 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 1 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.075 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:17.334 00:17:17.334 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:17.334 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:17.334 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:17.593 { 00:17:17.593 "cntlid": 67, 00:17:17.593 "qid": 0, 00:17:17.593 "state": "enabled", 00:17:17.593 "thread": "nvmf_tgt_poll_group_000", 00:17:17.593 "listen_address": { 00:17:17.593 "trtype": "TCP", 00:17:17.593 "adrfam": "IPv4", 00:17:17.593 "traddr": "10.0.0.2", 00:17:17.593 "trsvcid": "4420" 00:17:17.593 }, 00:17:17.593 "peer_address": { 00:17:17.593 "trtype": "TCP", 00:17:17.593 "adrfam": "IPv4", 00:17:17.593 "traddr": "10.0.0.1", 00:17:17.593 "trsvcid": "55044" 00:17:17.593 }, 00:17:17.593 "auth": { 00:17:17.593 "state": "completed", 00:17:17.593 "digest": "sha384", 00:17:17.593 "dhgroup": "ffdhe3072" 00:17:17.593 } 00:17:17.593 } 00:17:17.593 ]' 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:17.593 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:17.852 12:03:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:18.420 12:03:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:18.420 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 2 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.420 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:18.679 00:17:18.679 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:18.679 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:18.679 12:03:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:18.938 { 00:17:18.938 "cntlid": 69, 00:17:18.938 "qid": 0, 00:17:18.938 "state": "enabled", 00:17:18.938 "thread": "nvmf_tgt_poll_group_000", 00:17:18.938 "listen_address": { 00:17:18.938 "trtype": "TCP", 00:17:18.938 "adrfam": "IPv4", 00:17:18.938 "traddr": "10.0.0.2", 00:17:18.938 "trsvcid": "4420" 00:17:18.938 }, 00:17:18.938 "peer_address": { 00:17:18.938 "trtype": "TCP", 00:17:18.938 "adrfam": "IPv4", 00:17:18.938 "traddr": "10.0.0.1", 00:17:18.938 "trsvcid": "55072" 00:17:18.938 }, 00:17:18.938 "auth": { 00:17:18.938 "state": "completed", 00:17:18.938 "digest": "sha384", 00:17:18.938 "dhgroup": "ffdhe3072" 00:17:18.938 } 00:17:18.938 } 00:17:18.938 ]' 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:18.938 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:19.197 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:19.765 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:19.765 12:03:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe3072 3 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:20.024 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.025 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:20.283 00:17:20.283 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:20.283 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:20.283 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:20.542 12:03:07 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:20.542 { 00:17:20.542 "cntlid": 71, 00:17:20.542 "qid": 0, 00:17:20.542 "state": "enabled", 00:17:20.542 "thread": "nvmf_tgt_poll_group_000", 00:17:20.542 "listen_address": { 00:17:20.542 "trtype": "TCP", 00:17:20.542 "adrfam": "IPv4", 00:17:20.542 "traddr": "10.0.0.2", 00:17:20.542 "trsvcid": "4420" 00:17:20.542 }, 00:17:20.542 "peer_address": { 00:17:20.542 "trtype": "TCP", 00:17:20.542 "adrfam": "IPv4", 00:17:20.542 "traddr": "10.0.0.1", 00:17:20.542 "trsvcid": "59386" 00:17:20.542 }, 00:17:20.542 "auth": { 00:17:20.542 "state": "completed", 00:17:20.542 "digest": "sha384", 00:17:20.542 "dhgroup": "ffdhe3072" 00:17:20.542 } 00:17:20.542 } 00:17:20.542 ]' 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:20.542 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:20.543 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:20.802 12:03:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:21.370 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.370 12:03:08 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.370 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 0 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.630 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:21.890 00:17:21.890 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:21.890 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:21.890 12:03:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:21.890 12:03:09 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:21.890 { 00:17:21.890 "cntlid": 73, 00:17:21.890 "qid": 0, 00:17:21.890 "state": "enabled", 00:17:21.890 "thread": "nvmf_tgt_poll_group_000", 00:17:21.890 "listen_address": { 00:17:21.890 "trtype": "TCP", 00:17:21.890 "adrfam": "IPv4", 00:17:21.890 "traddr": "10.0.0.2", 00:17:21.890 "trsvcid": "4420" 00:17:21.890 }, 00:17:21.890 "peer_address": { 00:17:21.890 "trtype": "TCP", 00:17:21.890 "adrfam": "IPv4", 00:17:21.890 "traddr": "10.0.0.1", 00:17:21.890 "trsvcid": "59414" 00:17:21.890 }, 00:17:21.890 "auth": { 00:17:21.890 "state": "completed", 00:17:21.890 "digest": "sha384", 00:17:21.890 "dhgroup": "ffdhe4096" 00:17:21.890 } 00:17:21.890 } 00:17:21.890 ]' 00:17:21.890 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:22.149 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:22.408 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:22.976 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # 
hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.976 12:03:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 1 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:22.976 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:23.234 00:17:23.234 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:23.234 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:23.234 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:23.493 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 
00:17:23.493 { 00:17:23.493 "cntlid": 75, 00:17:23.493 "qid": 0, 00:17:23.493 "state": "enabled", 00:17:23.493 "thread": "nvmf_tgt_poll_group_000", 00:17:23.493 "listen_address": { 00:17:23.493 "trtype": "TCP", 00:17:23.493 "adrfam": "IPv4", 00:17:23.493 "traddr": "10.0.0.2", 00:17:23.493 "trsvcid": "4420" 00:17:23.493 }, 00:17:23.493 "peer_address": { 00:17:23.493 "trtype": "TCP", 00:17:23.493 "adrfam": "IPv4", 00:17:23.493 "traddr": "10.0.0.1", 00:17:23.493 "trsvcid": "59446" 00:17:23.493 }, 00:17:23.493 "auth": { 00:17:23.493 "state": "completed", 00:17:23.493 "digest": "sha384", 00:17:23.493 "dhgroup": "ffdhe4096" 00:17:23.493 } 00:17:23.493 } 00:17:23.494 ]' 00:17:23.494 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:23.494 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:23.494 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:23.753 12:03:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:24.321 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.321 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:24.581 
12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 2 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.581 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:24.841 00:17:24.841 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:24.841 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:24.841 12:03:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:25.101 { 00:17:25.101 "cntlid": 77, 00:17:25.101 "qid": 0, 00:17:25.101 "state": "enabled", 00:17:25.101 "thread": "nvmf_tgt_poll_group_000", 00:17:25.101 "listen_address": { 00:17:25.101 "trtype": "TCP", 00:17:25.101 "adrfam": "IPv4", 00:17:25.101 "traddr": "10.0.0.2", 00:17:25.101 "trsvcid": "4420" 00:17:25.101 }, 00:17:25.101 "peer_address": { 
00:17:25.101 "trtype": "TCP", 00:17:25.101 "adrfam": "IPv4", 00:17:25.101 "traddr": "10.0.0.1", 00:17:25.101 "trsvcid": "59474" 00:17:25.101 }, 00:17:25.101 "auth": { 00:17:25.101 "state": "completed", 00:17:25.101 "digest": "sha384", 00:17:25.101 "dhgroup": "ffdhe4096" 00:17:25.101 } 00:17:25.101 } 00:17:25.101 ]' 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:25.101 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:25.361 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:25.929 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.929 12:03:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:17:25.929 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe4096 3 00:17:25.929 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:25.929 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 
00:17:25.929 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:25.929 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:25.930 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:26.189 00:17:26.189 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:26.189 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:26.189 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:26.448 { 00:17:26.448 "cntlid": 79, 00:17:26.448 "qid": 0, 00:17:26.448 "state": "enabled", 00:17:26.448 "thread": "nvmf_tgt_poll_group_000", 00:17:26.448 "listen_address": { 00:17:26.448 "trtype": "TCP", 00:17:26.448 "adrfam": "IPv4", 00:17:26.448 "traddr": "10.0.0.2", 00:17:26.448 "trsvcid": "4420" 00:17:26.448 }, 00:17:26.448 "peer_address": { 00:17:26.448 "trtype": "TCP", 00:17:26.448 "adrfam": "IPv4", 00:17:26.448 "traddr": "10.0.0.1", 00:17:26.448 "trsvcid": "59498" 00:17:26.448 }, 00:17:26.448 "auth": { 00:17:26.448 "state": "completed", 00:17:26.448 "digest": "sha384", 00:17:26.448 "dhgroup": "ffdhe4096" 00:17:26.448 } 00:17:26.448 } 00:17:26.448 ]' 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r 
'.[0].auth.digest' 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:26.448 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:26.778 12:03:13 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:27.347 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.347 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 0 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 
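Each connect_authenticate() pass logged above reduces to the same handful of SPDK RPCs; "hostrpc" targets the host application's socket at /var/tmp/host.sock while "rpc_cmd" goes to the target application's default RPC socket. A condensed paraphrase of one iteration (sha384 digest, ffdhe6144 group, key index 0), assuming the rpc.py path from this run and that the named keys were registered earlier in auth.sh (not shown in this excerpt) — a sketch of the logged commands, not the script itself:

    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
    # restrict the host-side bdev_nvme layer to the digest/dhgroup under test
    $rpc -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144
    # target side: allow the host on the subsystem with the matching key pair
    $rpc nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 \
        nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # host side: attach a controller, which triggers the DH-HMAC-CHAP exchange
    $rpc -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 \
        -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 \
        -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0
    # verify the qpair reports the negotiated digest/dhgroup and a completed auth state
    $rpc nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 \
        | jq -r '.[0].auth.digest, .[0].auth.dhgroup, .[0].auth.state'
    # clean up before the next key/dhgroup combination
    $rpc -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0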
00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.606 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:27.866 00:17:27.866 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:27.866 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:27.866 12:03:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:28.125 { 00:17:28.125 "cntlid": 81, 00:17:28.125 "qid": 0, 00:17:28.125 "state": "enabled", 00:17:28.125 "thread": "nvmf_tgt_poll_group_000", 00:17:28.125 "listen_address": { 00:17:28.125 "trtype": "TCP", 00:17:28.125 "adrfam": "IPv4", 00:17:28.125 "traddr": "10.0.0.2", 00:17:28.125 "trsvcid": "4420" 00:17:28.125 }, 00:17:28.125 "peer_address": { 00:17:28.125 "trtype": "TCP", 00:17:28.125 "adrfam": "IPv4", 00:17:28.125 "traddr": "10.0.0.1", 00:17:28.125 "trsvcid": "59516" 00:17:28.125 }, 00:17:28.125 "auth": { 00:17:28.125 "state": "completed", 00:17:28.125 "digest": "sha384", 00:17:28.125 "dhgroup": "ffdhe6144" 00:17:28.125 } 00:17:28.125 } 00:17:28.125 ]' 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:28.125 12:03:15 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:28.125 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:28.384 12:03:15 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:28.954 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:28.954 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 1 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.214 12:03:16 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.214 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:29.474 00:17:29.474 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:29.474 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:29.474 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:29.733 { 00:17:29.733 "cntlid": 83, 00:17:29.733 "qid": 0, 00:17:29.733 "state": "enabled", 00:17:29.733 "thread": "nvmf_tgt_poll_group_000", 00:17:29.733 "listen_address": { 00:17:29.733 "trtype": "TCP", 00:17:29.733 "adrfam": "IPv4", 00:17:29.733 "traddr": "10.0.0.2", 00:17:29.733 "trsvcid": "4420" 00:17:29.733 }, 00:17:29.733 "peer_address": { 00:17:29.733 "trtype": "TCP", 00:17:29.733 "adrfam": "IPv4", 00:17:29.733 "traddr": "10.0.0.1", 00:17:29.733 "trsvcid": "51606" 00:17:29.733 }, 00:17:29.733 "auth": { 00:17:29.733 "state": "completed", 00:17:29.733 "digest": "sha384", 00:17:29.733 "dhgroup": "ffdhe6144" 00:17:29.733 } 00:17:29.733 } 00:17:29.733 ]' 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:29.733 12:03:16 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:29.992 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:30.562 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.562 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:30.821 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 2 00:17:30.821 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:30.821 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:30.821 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:30.822 12:03:17 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:30.822 12:03:17 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:31.082 00:17:31.082 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:31.082 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:31.082 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:31.341 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:31.342 { 00:17:31.342 "cntlid": 85, 00:17:31.342 "qid": 0, 00:17:31.342 "state": "enabled", 00:17:31.342 "thread": "nvmf_tgt_poll_group_000", 00:17:31.342 "listen_address": { 00:17:31.342 "trtype": "TCP", 00:17:31.342 "adrfam": "IPv4", 00:17:31.342 "traddr": "10.0.0.2", 00:17:31.342 "trsvcid": "4420" 00:17:31.342 }, 00:17:31.342 "peer_address": { 00:17:31.342 "trtype": "TCP", 00:17:31.342 "adrfam": "IPv4", 00:17:31.342 "traddr": "10.0.0.1", 00:17:31.342 "trsvcid": "51620" 00:17:31.342 }, 00:17:31.342 "auth": { 00:17:31.342 "state": "completed", 00:17:31.342 "digest": "sha384", 00:17:31.342 "dhgroup": "ffdhe6144" 00:17:31.342 } 00:17:31.342 } 00:17:31.342 ]' 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:31.342 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:31.604 12:03:18 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:32.172 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe6144 3 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.172 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.172 12:03:19 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:32.740 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:32.740 { 00:17:32.740 "cntlid": 87, 00:17:32.740 "qid": 0, 00:17:32.740 "state": "enabled", 00:17:32.740 "thread": "nvmf_tgt_poll_group_000", 00:17:32.740 "listen_address": { 00:17:32.740 "trtype": "TCP", 00:17:32.740 "adrfam": "IPv4", 00:17:32.740 "traddr": "10.0.0.2", 00:17:32.740 "trsvcid": "4420" 00:17:32.740 }, 00:17:32.740 "peer_address": { 00:17:32.740 "trtype": "TCP", 00:17:32.740 "adrfam": "IPv4", 00:17:32.740 "traddr": "10.0.0.1", 00:17:32.740 "trsvcid": "51642" 00:17:32.740 }, 00:17:32.740 "auth": { 00:17:32.740 "state": "completed", 00:17:32.740 "digest": "sha384", 00:17:32.740 "dhgroup": "ffdhe6144" 00:17:32.740 } 00:17:32.740 } 00:17:32.740 ]' 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:32.740 12:03:19 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:32.999 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:17:32.999 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:32.999 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:32.999 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:32.999 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:33.258 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 
--dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:33.517 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:33.517 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:33.517 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:33.517 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.517 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 0 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:33.777 12:03:20 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n 
nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:34.345 00:17:34.345 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:34.345 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:34.345 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:34.605 { 00:17:34.605 "cntlid": 89, 00:17:34.605 "qid": 0, 00:17:34.605 "state": "enabled", 00:17:34.605 "thread": "nvmf_tgt_poll_group_000", 00:17:34.605 "listen_address": { 00:17:34.605 "trtype": "TCP", 00:17:34.605 "adrfam": "IPv4", 00:17:34.605 "traddr": "10.0.0.2", 00:17:34.605 "trsvcid": "4420" 00:17:34.605 }, 00:17:34.605 "peer_address": { 00:17:34.605 "trtype": "TCP", 00:17:34.605 "adrfam": "IPv4", 00:17:34.605 "traddr": "10.0.0.1", 00:17:34.605 "trsvcid": "51682" 00:17:34.605 }, 00:17:34.605 "auth": { 00:17:34.605 "state": "completed", 00:17:34.605 "digest": "sha384", 00:17:34.605 "dhgroup": "ffdhe8192" 00:17:34.605 } 00:17:34.605 } 00:17:34.605 ]' 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:34.605 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:34.864 12:03:21 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:35.432 12:03:22 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:35.432 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 1 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:35.433 12:03:22 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:36.001 00:17:36.001 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:36.001 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@44 -- # jq -r '.[].name' 00:17:36.001 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:36.259 { 00:17:36.259 "cntlid": 91, 00:17:36.259 "qid": 0, 00:17:36.259 "state": "enabled", 00:17:36.259 "thread": "nvmf_tgt_poll_group_000", 00:17:36.259 "listen_address": { 00:17:36.259 "trtype": "TCP", 00:17:36.259 "adrfam": "IPv4", 00:17:36.259 "traddr": "10.0.0.2", 00:17:36.259 "trsvcid": "4420" 00:17:36.259 }, 00:17:36.259 "peer_address": { 00:17:36.259 "trtype": "TCP", 00:17:36.259 "adrfam": "IPv4", 00:17:36.259 "traddr": "10.0.0.1", 00:17:36.259 "trsvcid": "51692" 00:17:36.259 }, 00:17:36.259 "auth": { 00:17:36.259 "state": "completed", 00:17:36.259 "digest": "sha384", 00:17:36.259 "dhgroup": "ffdhe8192" 00:17:36.259 } 00:17:36.259 } 00:17:36.259 ]' 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:36.259 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:36.517 12:03:23 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:37.084 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.084 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 2 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.343 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:37.602 00:17:37.860 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:37.860 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:37.860 12:03:24 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:37.860 { 00:17:37.860 "cntlid": 93, 00:17:37.860 "qid": 0, 00:17:37.860 "state": "enabled", 00:17:37.860 "thread": "nvmf_tgt_poll_group_000", 00:17:37.860 "listen_address": { 00:17:37.860 "trtype": "TCP", 00:17:37.860 "adrfam": "IPv4", 00:17:37.860 "traddr": "10.0.0.2", 00:17:37.860 "trsvcid": "4420" 00:17:37.860 }, 00:17:37.860 "peer_address": { 00:17:37.860 "trtype": "TCP", 00:17:37.860 "adrfam": "IPv4", 00:17:37.860 "traddr": "10.0.0.1", 00:17:37.860 "trsvcid": "51726" 00:17:37.860 }, 00:17:37.860 "auth": { 00:17:37.860 "state": "completed", 00:17:37.860 "digest": "sha384", 00:17:37.860 "dhgroup": "ffdhe8192" 00:17:37.860 } 00:17:37.860 } 00:17:37.860 ]' 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:37.860 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:38.119 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:38.687 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:38.687 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:38.687 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:38.687 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.687 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.688 12:03:25 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.688 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:38.688 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.688 12:03:25 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha384 ffdhe8192 3 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha384 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:38.947 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:39.515 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # 
set +x 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:39.515 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:39.515 { 00:17:39.515 "cntlid": 95, 00:17:39.515 "qid": 0, 00:17:39.515 "state": "enabled", 00:17:39.515 "thread": "nvmf_tgt_poll_group_000", 00:17:39.515 "listen_address": { 00:17:39.515 "trtype": "TCP", 00:17:39.515 "adrfam": "IPv4", 00:17:39.515 "traddr": "10.0.0.2", 00:17:39.515 "trsvcid": "4420" 00:17:39.515 }, 00:17:39.515 "peer_address": { 00:17:39.515 "trtype": "TCP", 00:17:39.515 "adrfam": "IPv4", 00:17:39.515 "traddr": "10.0.0.1", 00:17:39.515 "trsvcid": "51766" 00:17:39.516 }, 00:17:39.516 "auth": { 00:17:39.516 "state": "completed", 00:17:39.516 "digest": "sha384", 00:17:39.516 "dhgroup": "ffdhe8192" 00:17:39.516 } 00:17:39.516 } 00:17:39.516 ]' 00:17:39.516 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha384 == \s\h\a\3\8\4 ]] 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:39.775 12:03:26 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:40.034 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:40.603 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@91 -- # for digest in "${digests[@]}" 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:40.603 12:03:27 
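At this point the trace finishes the sha384/ffdhe8192 group and starts over with sha512 and the null DH group. Stripped of the xtrace prefixes, every iteration in this section is the same DH-HMAC-CHAP cycle run once per configured key. The sketch below is a paraphrase of that cycle using only the hostrpc/rpc_cmd and nvme-cli calls visible in the trace; the digests/dhgroups/keys/ckeys arrays and the hostnqn/hostid variables are assumed to be set up earlier in auth.sh and are not shown here.

    # Paraphrased per-key DH-HMAC-CHAP cycle, reconstructed from the xtrace above.
    # hostrpc drives rpc.py against /var/tmp/host.sock (the initiator-side SPDK app);
    # rpc_cmd talks to the nvmf target. hostnqn/hostid are assumed variables.
    for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
        for keyid in "${!keys[@]}"; do
          # Limit the host to a single digest/DH-group combination for this pass.
          hostrpc bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
          # Register the host on the subsystem with the key under test
          # (controller key only when a ckey exists for this index).
          rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 "$hostnqn" \
            --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
          # Attach a controller from the host side with the same key, check that the
          # qpair authenticated (see the jq assertions in the trace), then detach.
          hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 \
            -q "$hostnqn" -n nqn.2024-03.io.spdk:cnode0 \
            --dhchap-key "key$keyid" ${ckeys[$keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
          rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0
          hostrpc bdev_nvme_detach_controller nvme0
          # Repeat the handshake with the kernel initiator; here --dhchap-secret takes
          # the DHHC-1 secret string itself (shown verbatim in the trace), not a key name.
          nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q "$hostnqn" \
            --hostid "$hostid" --dhchap-secret "${keys[$keyid]}" \
            ${ckeys[$keyid]:+--dhchap-ctrl-secret "${ckeys[$keyid]}"}
          nvme disconnect -n nqn.2024-03.io.spdk:cnode0
          rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 "$hostnqn"
        done
      done
    done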
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 0 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.603 12:03:27 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:40.863 00:17:40.863 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:40.863 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:40.863 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.153 12:03:28 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:41.153 { 00:17:41.153 "cntlid": 97, 00:17:41.153 "qid": 0, 00:17:41.153 "state": "enabled", 00:17:41.153 "thread": "nvmf_tgt_poll_group_000", 00:17:41.153 "listen_address": { 00:17:41.153 "trtype": "TCP", 00:17:41.153 "adrfam": "IPv4", 00:17:41.153 "traddr": "10.0.0.2", 00:17:41.153 "trsvcid": "4420" 00:17:41.153 }, 00:17:41.153 "peer_address": { 00:17:41.153 "trtype": "TCP", 00:17:41.153 "adrfam": "IPv4", 00:17:41.153 "traddr": "10.0.0.1", 00:17:41.153 "trsvcid": "59076" 00:17:41.153 }, 00:17:41.153 "auth": { 00:17:41.153 "state": "completed", 00:17:41.153 "digest": "sha512", 00:17:41.153 "dhgroup": "null" 00:17:41.153 } 00:17:41.153 } 00:17:41.153 ]' 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:41.153 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:41.413 12:03:28 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:41.981 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:41.981 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s 
/var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 1 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:42.241 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:42.241 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:42.501 { 00:17:42.501 "cntlid": 99, 00:17:42.501 "qid": 0, 00:17:42.501 "state": "enabled", 00:17:42.501 "thread": "nvmf_tgt_poll_group_000", 00:17:42.501 "listen_address": { 00:17:42.501 "trtype": "TCP", 00:17:42.501 "adrfam": "IPv4", 00:17:42.501 
"traddr": "10.0.0.2", 00:17:42.501 "trsvcid": "4420" 00:17:42.501 }, 00:17:42.501 "peer_address": { 00:17:42.501 "trtype": "TCP", 00:17:42.501 "adrfam": "IPv4", 00:17:42.501 "traddr": "10.0.0.1", 00:17:42.501 "trsvcid": "59104" 00:17:42.501 }, 00:17:42.501 "auth": { 00:17:42.501 "state": "completed", 00:17:42.501 "digest": "sha512", 00:17:42.501 "dhgroup": "null" 00:17:42.501 } 00:17:42.501 } 00:17:42.501 ]' 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:42.501 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:42.761 12:03:29 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:43.332 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.332 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 2 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:43.591 12:03:30 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.591 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:43.851 00:17:43.851 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:43.851 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:43.851 12:03:30 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:44.110 { 00:17:44.110 "cntlid": 101, 00:17:44.110 "qid": 0, 00:17:44.110 "state": "enabled", 00:17:44.110 "thread": "nvmf_tgt_poll_group_000", 00:17:44.110 "listen_address": { 00:17:44.110 "trtype": "TCP", 00:17:44.110 "adrfam": "IPv4", 00:17:44.110 "traddr": "10.0.0.2", 00:17:44.110 "trsvcid": "4420" 00:17:44.110 }, 00:17:44.110 "peer_address": { 00:17:44.110 "trtype": "TCP", 00:17:44.110 "adrfam": "IPv4", 00:17:44.110 "traddr": "10.0.0.1", 00:17:44.110 "trsvcid": "59116" 00:17:44.110 }, 00:17:44.110 "auth": { 00:17:44.110 "state": "completed", 00:17:44.110 "digest": "sha512", 00:17:44.110 "dhgroup": "null" 
00:17:44.110 } 00:17:44.110 } 00:17:44.110 ]' 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:44.110 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:44.370 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:44.939 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.939 12:03:31 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups null 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 null 3 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=null 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # 
ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:44.939 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:45.198 00:17:45.198 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:45.198 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:45.198 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:45.458 { 00:17:45.458 "cntlid": 103, 00:17:45.458 "qid": 0, 00:17:45.458 "state": "enabled", 00:17:45.458 "thread": "nvmf_tgt_poll_group_000", 00:17:45.458 "listen_address": { 00:17:45.458 "trtype": "TCP", 00:17:45.458 "adrfam": "IPv4", 00:17:45.458 "traddr": "10.0.0.2", 00:17:45.458 "trsvcid": "4420" 00:17:45.458 }, 00:17:45.458 "peer_address": { 00:17:45.458 "trtype": "TCP", 00:17:45.458 "adrfam": "IPv4", 00:17:45.458 "traddr": "10.0.0.1", 00:17:45.458 "trsvcid": "59134" 00:17:45.458 }, 00:17:45.458 "auth": { 00:17:45.458 "state": "completed", 00:17:45.458 "digest": "sha512", 00:17:45.458 "dhgroup": "null" 00:17:45.458 } 00:17:45.458 } 00:17:45.458 ]' 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:45.458 12:03:32 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ null == \n\u\l\l ]] 00:17:45.458 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:45.717 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:45.717 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:45.717 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:45.717 12:03:32 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:46.285 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.285 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 0 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.545 12:03:33 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.545 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:46.805 00:17:46.805 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:46.805 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:46.805 12:03:33 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:46.805 { 00:17:46.805 "cntlid": 105, 00:17:46.805 "qid": 0, 00:17:46.805 "state": "enabled", 00:17:46.805 "thread": "nvmf_tgt_poll_group_000", 00:17:46.805 "listen_address": { 00:17:46.805 "trtype": "TCP", 00:17:46.805 "adrfam": "IPv4", 00:17:46.805 "traddr": "10.0.0.2", 00:17:46.805 "trsvcid": "4420" 00:17:46.805 }, 00:17:46.805 "peer_address": { 00:17:46.805 "trtype": "TCP", 00:17:46.805 "adrfam": "IPv4", 00:17:46.805 "traddr": "10.0.0.1", 00:17:46.805 "trsvcid": "59176" 00:17:46.805 }, 00:17:46.805 "auth": { 00:17:46.805 "state": "completed", 00:17:46.805 "digest": "sha512", 00:17:46.805 "dhgroup": "ffdhe2048" 00:17:46.805 } 00:17:46.805 } 00:17:46.805 ]' 00:17:46.805 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:47.064 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:47.323 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:47.890 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.890 12:03:34 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:47.890 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 1 00:17:47.890 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:47.890 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:47.890 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:47.890 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 
0 == 0 ]] 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:47.891 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:48.150 00:17:48.150 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:48.150 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:48.150 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:48.410 { 00:17:48.410 "cntlid": 107, 00:17:48.410 "qid": 0, 00:17:48.410 "state": "enabled", 00:17:48.410 "thread": "nvmf_tgt_poll_group_000", 00:17:48.410 "listen_address": { 00:17:48.410 "trtype": "TCP", 00:17:48.410 "adrfam": "IPv4", 00:17:48.410 "traddr": "10.0.0.2", 00:17:48.410 "trsvcid": "4420" 00:17:48.410 }, 00:17:48.410 "peer_address": { 00:17:48.410 "trtype": "TCP", 00:17:48.410 "adrfam": "IPv4", 00:17:48.410 "traddr": "10.0.0.1", 00:17:48.410 "trsvcid": "59210" 00:17:48.410 }, 00:17:48.410 "auth": { 00:17:48.410 "state": "completed", 00:17:48.410 "digest": "sha512", 00:17:48.410 "dhgroup": "ffdhe2048" 00:17:48.410 } 00:17:48.410 } 00:17:48.410 ]' 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:48.410 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:48.669 12:03:35 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:49.237 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.237 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 2 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 
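Every hostrpc line in this trace (target/auth.sh@31) expands to the same rpc.py invocation against the host-side socket, as the paired echoed commands above show. A minimal equivalent of that wrapper, with $rootdir assumed to point at the spdk checkout used in this run, would be:

    # Minimal equivalent of the hostrpc helper seen at target/auth.sh@31:
    # forward any RPC to the bdev_nvme host application listening on /var/tmp/host.sock.
    hostrpc() {
      "$rootdir/scripts/rpc.py" -s /var/tmp/host.sock "$@"
    }
    # Example usage matching target/auth.sh@44 in the trace:
    #   hostrpc bdev_nvme_get_controllers | jq -r '.[].name'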
00:17:49.495 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:49.495 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:49.754 { 00:17:49.754 "cntlid": 109, 00:17:49.754 "qid": 0, 00:17:49.754 "state": "enabled", 00:17:49.754 "thread": "nvmf_tgt_poll_group_000", 00:17:49.754 "listen_address": { 00:17:49.754 "trtype": "TCP", 00:17:49.754 "adrfam": "IPv4", 00:17:49.754 "traddr": "10.0.0.2", 00:17:49.754 "trsvcid": "4420" 00:17:49.754 }, 00:17:49.754 "peer_address": { 00:17:49.754 "trtype": "TCP", 00:17:49.754 "adrfam": "IPv4", 00:17:49.754 "traddr": "10.0.0.1", 00:17:49.754 "trsvcid": "45064" 00:17:49.754 }, 00:17:49.754 "auth": { 00:17:49.754 "state": "completed", 00:17:49.754 "digest": "sha512", 00:17:49.754 "dhgroup": "ffdhe2048" 00:17:49.754 } 00:17:49.754 } 00:17:49.754 ]' 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:49.754 12:03:36 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:49.754 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:50.013 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:50.013 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:50.013 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:50.013 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:50.013 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:50.272 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 
--hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:50.530 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:50.790 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe2048 3 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe2048 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:50.790 12:03:37 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:51.049 00:17:51.049 12:03:38 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:51.049 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:51.049 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:51.309 { 00:17:51.309 "cntlid": 111, 00:17:51.309 "qid": 0, 00:17:51.309 "state": "enabled", 00:17:51.309 "thread": "nvmf_tgt_poll_group_000", 00:17:51.309 "listen_address": { 00:17:51.309 "trtype": "TCP", 00:17:51.309 "adrfam": "IPv4", 00:17:51.309 "traddr": "10.0.0.2", 00:17:51.309 "trsvcid": "4420" 00:17:51.309 }, 00:17:51.309 "peer_address": { 00:17:51.309 "trtype": "TCP", 00:17:51.309 "adrfam": "IPv4", 00:17:51.309 "traddr": "10.0.0.1", 00:17:51.309 "trsvcid": "45084" 00:17:51.309 }, 00:17:51.309 "auth": { 00:17:51.309 "state": "completed", 00:17:51.309 "digest": "sha512", 00:17:51.309 "dhgroup": "ffdhe2048" 00:17:51.309 } 00:17:51.309 } 00:17:51.309 ]' 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe2048 == \f\f\d\h\e\2\0\4\8 ]] 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:51.309 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:51.568 12:03:38 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:52.136 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:52.136 12:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.136 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 0 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.395 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:52.655 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:52.655 12:03:39 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:52.655 { 00:17:52.655 "cntlid": 113, 00:17:52.655 "qid": 0, 00:17:52.655 "state": "enabled", 00:17:52.655 "thread": "nvmf_tgt_poll_group_000", 00:17:52.655 "listen_address": { 00:17:52.655 "trtype": "TCP", 00:17:52.655 "adrfam": "IPv4", 00:17:52.655 "traddr": "10.0.0.2", 00:17:52.655 "trsvcid": "4420" 00:17:52.655 }, 00:17:52.655 "peer_address": { 00:17:52.655 "trtype": "TCP", 00:17:52.655 "adrfam": "IPv4", 00:17:52.655 "traddr": "10.0.0.1", 00:17:52.655 "trsvcid": "45116" 00:17:52.655 }, 00:17:52.655 "auth": { 00:17:52.655 "state": "completed", 00:17:52.655 "digest": "sha512", 00:17:52.655 "dhgroup": "ffdhe3072" 00:17:52.655 } 00:17:52.655 } 00:17:52.655 ]' 00:17:52.655 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:52.916 12:03:39 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:53.175 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:53.745 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 1 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:53.745 12:03:40 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:54.004 00:17:54.004 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:54.004 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:54.004 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ 
nvme0 == \n\v\m\e\0 ]] 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:54.264 { 00:17:54.264 "cntlid": 115, 00:17:54.264 "qid": 0, 00:17:54.264 "state": "enabled", 00:17:54.264 "thread": "nvmf_tgt_poll_group_000", 00:17:54.264 "listen_address": { 00:17:54.264 "trtype": "TCP", 00:17:54.264 "adrfam": "IPv4", 00:17:54.264 "traddr": "10.0.0.2", 00:17:54.264 "trsvcid": "4420" 00:17:54.264 }, 00:17:54.264 "peer_address": { 00:17:54.264 "trtype": "TCP", 00:17:54.264 "adrfam": "IPv4", 00:17:54.264 "traddr": "10.0.0.1", 00:17:54.264 "trsvcid": "45146" 00:17:54.264 }, 00:17:54.264 "auth": { 00:17:54.264 "state": "completed", 00:17:54.264 "digest": "sha512", 00:17:54.264 "dhgroup": "ffdhe3072" 00:17:54.264 } 00:17:54.264 } 00:17:54.264 ]' 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:54.264 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:54.524 12:03:41 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:55.094 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.094 12:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.094 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 2 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.388 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:17:55.388 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:55.648 12:03:42 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:55.648 { 00:17:55.648 "cntlid": 117, 00:17:55.648 "qid": 0, 00:17:55.648 "state": "enabled", 00:17:55.648 "thread": "nvmf_tgt_poll_group_000", 00:17:55.648 "listen_address": { 00:17:55.648 "trtype": "TCP", 00:17:55.648 "adrfam": "IPv4", 00:17:55.648 "traddr": "10.0.0.2", 00:17:55.648 "trsvcid": "4420" 00:17:55.648 }, 00:17:55.648 "peer_address": { 00:17:55.648 "trtype": "TCP", 00:17:55.648 "adrfam": "IPv4", 00:17:55.648 "traddr": "10.0.0.1", 00:17:55.648 "trsvcid": "45182" 00:17:55.648 }, 00:17:55.648 "auth": { 00:17:55.648 "state": "completed", 00:17:55.648 "digest": "sha512", 00:17:55.648 "dhgroup": "ffdhe3072" 00:17:55.648 } 00:17:55.648 } 00:17:55.648 ]' 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:55.648 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:55.907 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:55.907 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:55.907 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:55.907 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:55.907 12:03:42 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:55.907 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:56.476 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests 
sha512 --dhchap-dhgroups ffdhe3072 00:17:56.476 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe3072 3 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe3072 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.735 12:03:43 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:17:56.994 00:17:56.994 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:56.994 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:56.994 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:57.253 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:57.253 { 00:17:57.253 "cntlid": 119, 00:17:57.253 "qid": 0, 00:17:57.253 "state": "enabled", 00:17:57.253 "thread": 
"nvmf_tgt_poll_group_000", 00:17:57.253 "listen_address": { 00:17:57.253 "trtype": "TCP", 00:17:57.254 "adrfam": "IPv4", 00:17:57.254 "traddr": "10.0.0.2", 00:17:57.254 "trsvcid": "4420" 00:17:57.254 }, 00:17:57.254 "peer_address": { 00:17:57.254 "trtype": "TCP", 00:17:57.254 "adrfam": "IPv4", 00:17:57.254 "traddr": "10.0.0.1", 00:17:57.254 "trsvcid": "45218" 00:17:57.254 }, 00:17:57.254 "auth": { 00:17:57.254 "state": "completed", 00:17:57.254 "digest": "sha512", 00:17:57.254 "dhgroup": "ffdhe3072" 00:17:57.254 } 00:17:57.254 } 00:17:57.254 ]' 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe3072 == \f\f\d\h\e\3\0\7\2 ]] 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:57.254 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:57.513 12:03:44 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:58.081 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.081 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 0 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.341 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:17:58.601 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:17:58.601 { 00:17:58.601 "cntlid": 121, 00:17:58.601 "qid": 0, 00:17:58.601 "state": "enabled", 00:17:58.601 "thread": "nvmf_tgt_poll_group_000", 00:17:58.601 "listen_address": { 00:17:58.601 "trtype": "TCP", 00:17:58.601 "adrfam": "IPv4", 00:17:58.601 "traddr": "10.0.0.2", 00:17:58.601 "trsvcid": "4420" 00:17:58.601 }, 00:17:58.601 "peer_address": { 00:17:58.601 "trtype": "TCP", 00:17:58.601 "adrfam": 
"IPv4", 00:17:58.601 "traddr": "10.0.0.1", 00:17:58.601 "trsvcid": "45240" 00:17:58.601 }, 00:17:58.601 "auth": { 00:17:58.601 "state": "completed", 00:17:58.601 "digest": "sha512", 00:17:58.601 "dhgroup": "ffdhe4096" 00:17:58.601 } 00:17:58.601 } 00:17:58.601 ]' 00:17:58.601 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:17:58.861 12:03:45 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:17:59.121 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:17:59.691 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 1 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:17:59.691 
12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.691 12:03:46 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:17:59.951 00:17:59.951 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:17:59.951 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:17:59.951 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:00.210 { 00:18:00.210 "cntlid": 123, 00:18:00.210 "qid": 0, 00:18:00.210 "state": "enabled", 00:18:00.210 "thread": "nvmf_tgt_poll_group_000", 00:18:00.210 "listen_address": { 00:18:00.210 "trtype": "TCP", 00:18:00.210 "adrfam": "IPv4", 00:18:00.210 "traddr": "10.0.0.2", 00:18:00.210 "trsvcid": "4420" 00:18:00.210 }, 00:18:00.210 "peer_address": { 00:18:00.210 "trtype": "TCP", 00:18:00.210 "adrfam": "IPv4", 00:18:00.210 "traddr": "10.0.0.1", 00:18:00.210 "trsvcid": "33260" 00:18:00.210 }, 00:18:00.210 "auth": { 00:18:00.210 "state": "completed", 00:18:00.210 "digest": "sha512", 00:18:00.210 "dhgroup": "ffdhe4096" 00:18:00.210 } 00:18:00.210 } 00:18:00.210 ]' 00:18:00.210 12:03:47 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:00.210 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:00.470 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:00.470 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:00.470 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:00.470 12:03:47 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:01.040 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.040 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 2 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key 
"ckey$3"}) 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.299 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.300 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.300 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.300 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:01.559 00:18:01.559 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:01.559 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:01.559 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:01.818 { 00:18:01.818 "cntlid": 125, 00:18:01.818 "qid": 0, 00:18:01.818 "state": "enabled", 00:18:01.818 "thread": "nvmf_tgt_poll_group_000", 00:18:01.818 "listen_address": { 00:18:01.818 "trtype": "TCP", 00:18:01.818 "adrfam": "IPv4", 00:18:01.818 "traddr": "10.0.0.2", 00:18:01.818 "trsvcid": "4420" 00:18:01.818 }, 00:18:01.818 "peer_address": { 00:18:01.818 "trtype": "TCP", 00:18:01.818 "adrfam": "IPv4", 00:18:01.818 "traddr": "10.0.0.1", 00:18:01.818 "trsvcid": "33290" 00:18:01.818 }, 00:18:01.818 "auth": { 00:18:01.818 "state": "completed", 00:18:01.818 "digest": "sha512", 00:18:01.818 "dhgroup": "ffdhe4096" 00:18:01.818 } 00:18:01.818 } 00:18:01.818 ]' 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:01.818 
12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:01.818 12:03:48 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:02.078 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:02.647 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe4096 3 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe4096 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.647 12:03:49 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:02.906 00:18:02.906 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:02.906 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:02.906 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:03.166 { 00:18:03.166 "cntlid": 127, 00:18:03.166 "qid": 0, 00:18:03.166 "state": "enabled", 00:18:03.166 "thread": "nvmf_tgt_poll_group_000", 00:18:03.166 "listen_address": { 00:18:03.166 "trtype": "TCP", 00:18:03.166 "adrfam": "IPv4", 00:18:03.166 "traddr": "10.0.0.2", 00:18:03.166 "trsvcid": "4420" 00:18:03.166 }, 00:18:03.166 "peer_address": { 00:18:03.166 "trtype": "TCP", 00:18:03.166 "adrfam": "IPv4", 00:18:03.166 "traddr": "10.0.0.1", 00:18:03.166 "trsvcid": "33304" 00:18:03.166 }, 00:18:03.166 "auth": { 00:18:03.166 "state": "completed", 00:18:03.166 "digest": "sha512", 00:18:03.166 "dhgroup": "ffdhe4096" 00:18:03.166 } 00:18:03.166 } 00:18:03.166 ]' 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe4096 == \f\f\d\h\e\4\0\9\6 ]] 00:18:03.166 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:03.427 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:03.427 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:03.427 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:03.427 12:03:50 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:03.996 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:03.996 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 0 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.256 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc 
bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.257 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:04.516 00:18:04.516 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:04.516 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:04.516 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:04.775 { 00:18:04.775 "cntlid": 129, 00:18:04.775 "qid": 0, 00:18:04.775 "state": "enabled", 00:18:04.775 "thread": "nvmf_tgt_poll_group_000", 00:18:04.775 "listen_address": { 00:18:04.775 "trtype": "TCP", 00:18:04.775 "adrfam": "IPv4", 00:18:04.775 "traddr": "10.0.0.2", 00:18:04.775 "trsvcid": "4420" 00:18:04.775 }, 00:18:04.775 "peer_address": { 00:18:04.775 "trtype": "TCP", 00:18:04.775 "adrfam": "IPv4", 00:18:04.775 "traddr": "10.0.0.1", 00:18:04.775 "trsvcid": "33330" 00:18:04.775 }, 00:18:04.775 "auth": { 00:18:04.775 "state": "completed", 00:18:04.775 "digest": "sha512", 00:18:04.775 "dhgroup": "ffdhe6144" 00:18:04.775 } 00:18:04.775 } 00:18:04.775 ]' 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:04.775 12:03:51 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:05.035 
12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:05.615 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.615 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 1 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:05.877 12:03:52 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:06.136 00:18:06.136 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:06.136 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:06.137 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:06.396 { 00:18:06.396 "cntlid": 131, 00:18:06.396 "qid": 0, 00:18:06.396 "state": "enabled", 00:18:06.396 "thread": "nvmf_tgt_poll_group_000", 00:18:06.396 "listen_address": { 00:18:06.396 "trtype": "TCP", 00:18:06.396 "adrfam": "IPv4", 00:18:06.396 "traddr": "10.0.0.2", 00:18:06.396 "trsvcid": "4420" 00:18:06.396 }, 00:18:06.396 "peer_address": { 00:18:06.396 "trtype": "TCP", 00:18:06.396 "adrfam": "IPv4", 00:18:06.396 "traddr": "10.0.0.1", 00:18:06.396 "trsvcid": "33358" 00:18:06.396 }, 00:18:06.396 "auth": { 00:18:06.396 "state": "completed", 00:18:06.396 "digest": "sha512", 00:18:06.396 "dhgroup": "ffdhe6144" 00:18:06.396 } 00:18:06.396 } 00:18:06.396 ]' 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:06.396 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:06.654 12:03:53 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret 
DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:07.225 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 2 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.225 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:07.795 
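The trace above repeats one pattern per DH-HMAC-CHAP key and FFDHE group: the host's bdev_nvme options are restricted to a single digest/dhgroup pair, the host NQN is registered on the subsystem with the matching key (and, where one exists, a bidirectional controller key), a controller is attached through the host RPC socket, the connection is verified and torn down, and the same credentials are then exercised with nvme-cli before the host entry is removed again. A minimal sketch of one such iteration follows; scripts/rpc.py stands in for the full rpc.py path used in the log, and $subnqn, $hostnqn, $hostid and the secret variables are illustrative placeholders for the values that appear verbatim above.
    # One iteration of the per-key authentication loop (sketch; names are placeholders).
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options \
        --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144              # host-side initiator options
    scripts/rpc.py nvmf_subsystem_add_host "$subnqn" "$hostnqn" \
        --dhchap-key key0 --dhchap-ctrlr-key ckey0                       # target side: allow this host with key0/ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 \
        -a 10.0.0.2 -s 4420 -q "$hostnqn" -n "$subnqn" --dhchap-key key0 --dhchap-ctrlr-key ckey0
    scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0
    nvme connect -t tcp -a 10.0.0.2 -n "$subnqn" -i 1 -q "$hostnqn" --hostid "$hostid" \
        --dhchap-secret "$key0_secret" --dhchap-ctrl-secret "$ckey0_secret"   # kernel-initiator path
    nvme disconnect -n "$subnqn"
    scripts/rpc.py nvmf_subsystem_remove_host "$subnqn" "$hostnqn"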
00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:07.795 { 00:18:07.795 "cntlid": 133, 00:18:07.795 "qid": 0, 00:18:07.795 "state": "enabled", 00:18:07.795 "thread": "nvmf_tgt_poll_group_000", 00:18:07.795 "listen_address": { 00:18:07.795 "trtype": "TCP", 00:18:07.795 "adrfam": "IPv4", 00:18:07.795 "traddr": "10.0.0.2", 00:18:07.795 "trsvcid": "4420" 00:18:07.795 }, 00:18:07.795 "peer_address": { 00:18:07.795 "trtype": "TCP", 00:18:07.795 "adrfam": "IPv4", 00:18:07.795 "traddr": "10.0.0.1", 00:18:07.795 "trsvcid": "33386" 00:18:07.795 }, 00:18:07.795 "auth": { 00:18:07.795 "state": "completed", 00:18:07.795 "digest": "sha512", 00:18:07.795 "dhgroup": "ffdhe6144" 00:18:07.795 } 00:18:07.795 } 00:18:07.795 ]' 00:18:07.795 12:03:54 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:07.795 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:07.795 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:07.795 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:07.795 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:08.054 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:08.054 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:08.054 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:08.054 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:08.623 NQN:nqn.2024-03.io.spdk:cnode0 
disconnected 1 controller(s) 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.623 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe6144 3 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe6144 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:08.883 12:03:55 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:09.143 00:18:09.143 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:09.143 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:09.143 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock 
bdev_nvme_get_controllers 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:09.402 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:09.402 { 00:18:09.402 "cntlid": 135, 00:18:09.402 "qid": 0, 00:18:09.402 "state": "enabled", 00:18:09.402 "thread": "nvmf_tgt_poll_group_000", 00:18:09.402 "listen_address": { 00:18:09.402 "trtype": "TCP", 00:18:09.402 "adrfam": "IPv4", 00:18:09.402 "traddr": "10.0.0.2", 00:18:09.402 "trsvcid": "4420" 00:18:09.402 }, 00:18:09.402 "peer_address": { 00:18:09.402 "trtype": "TCP", 00:18:09.402 "adrfam": "IPv4", 00:18:09.402 "traddr": "10.0.0.1", 00:18:09.402 "trsvcid": "33398" 00:18:09.403 }, 00:18:09.403 "auth": { 00:18:09.403 "state": "completed", 00:18:09.403 "digest": "sha512", 00:18:09.403 "dhgroup": "ffdhe6144" 00:18:09.403 } 00:18:09.403 } 00:18:09.403 ]' 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe6144 == \f\f\d\h\e\6\1\4\4 ]] 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:09.403 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:09.755 12:03:56 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:10.322 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set 
+x 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@92 -- # for dhgroup in "${dhgroups[@]}" 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 0 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.322 12:03:57 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:10.890 00:18:10.890 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:10.890 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:10.890 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 
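Each attach in the trace is followed by the same verification step: the controller name is read back with bdev_nvme_get_controllers, and nvmf_subsystem_get_qpairs on the target reports the negotiated authentication parameters for the new queue pair, which the script checks field by field with jq. A compact sketch of that check, with the expected values for this sha512/ffdhe8192 pass and illustrative socket handling, would look roughly like:
    # Sketch of the auth-state verification seen in the trace (expected values for the sha512/ffdhe8192 pass).
    [[ $(scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]
    qpairs=$(scripts/rpc.py nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0)
    [[ $(jq -r '.[0].auth.digest'  <<< "$qpairs") == sha512    ]]   # negotiated hash function
    [[ $(jq -r '.[0].auth.dhgroup' <<< "$qpairs") == ffdhe8192 ]]   # negotiated DH group
    [[ $(jq -r '.[0].auth.state'   <<< "$qpairs") == completed ]]   # authentication finished successfully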
00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:11.150 { 00:18:11.150 "cntlid": 137, 00:18:11.150 "qid": 0, 00:18:11.150 "state": "enabled", 00:18:11.150 "thread": "nvmf_tgt_poll_group_000", 00:18:11.150 "listen_address": { 00:18:11.150 "trtype": "TCP", 00:18:11.150 "adrfam": "IPv4", 00:18:11.150 "traddr": "10.0.0.2", 00:18:11.150 "trsvcid": "4420" 00:18:11.150 }, 00:18:11.150 "peer_address": { 00:18:11.150 "trtype": "TCP", 00:18:11.150 "adrfam": "IPv4", 00:18:11.150 "traddr": "10.0.0.1", 00:18:11.150 "trsvcid": "49144" 00:18:11.150 }, 00:18:11.150 "auth": { 00:18:11.150 "state": "completed", 00:18:11.150 "digest": "sha512", 00:18:11.150 "dhgroup": "ffdhe8192" 00:18:11.150 } 00:18:11.150 } 00:18:11.150 ]' 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:11.150 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:11.409 12:03:58 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:11.977 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:11.977 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 1 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key1 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.236 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:12.805 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:12.805 { 00:18:12.805 "cntlid": 139, 00:18:12.805 "qid": 0, 00:18:12.805 "state": "enabled", 00:18:12.805 "thread": "nvmf_tgt_poll_group_000", 00:18:12.805 "listen_address": { 00:18:12.805 "trtype": "TCP", 00:18:12.805 "adrfam": "IPv4", 00:18:12.805 "traddr": "10.0.0.2", 00:18:12.805 "trsvcid": "4420" 00:18:12.805 }, 00:18:12.805 "peer_address": { 00:18:12.805 "trtype": "TCP", 00:18:12.805 "adrfam": "IPv4", 00:18:12.805 "traddr": "10.0.0.1", 00:18:12.805 "trsvcid": "49172" 00:18:12.805 }, 00:18:12.805 "auth": { 00:18:12.805 "state": "completed", 00:18:12.805 "digest": "sha512", 00:18:12.805 "dhgroup": "ffdhe8192" 00:18:12.805 } 00:18:12.805 } 00:18:12.805 ]' 00:18:12.805 12:03:59 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:12.805 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:12.805 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:12.805 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:12.805 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:13.065 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:13.065 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:13.065 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:13.065 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:01:MjY5NmY5NDUxZThjYThlODgwZDhmZTUxMjVmZmEyMjLroygh: --dhchap-ctrl-secret DHHC-1:02:ZGU1MTRjMTRiYjY4NjFmNzI0MGQ0OWFiODg2ZTVhMWQyMzQ1MDRlM2VlYjBkZWQ0CdORWg==: 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:13.632 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.632 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 2 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key2 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:13.891 12:04:00 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:18:14.459 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:14.459 { 00:18:14.459 "cntlid": 141, 00:18:14.459 "qid": 0, 00:18:14.459 "state": "enabled", 00:18:14.459 "thread": "nvmf_tgt_poll_group_000", 00:18:14.459 "listen_address": 
{ 00:18:14.459 "trtype": "TCP", 00:18:14.459 "adrfam": "IPv4", 00:18:14.459 "traddr": "10.0.0.2", 00:18:14.459 "trsvcid": "4420" 00:18:14.459 }, 00:18:14.459 "peer_address": { 00:18:14.459 "trtype": "TCP", 00:18:14.459 "adrfam": "IPv4", 00:18:14.459 "traddr": "10.0.0.1", 00:18:14.459 "trsvcid": "49196" 00:18:14.459 }, 00:18:14.459 "auth": { 00:18:14.459 "state": "completed", 00:18:14.459 "digest": "sha512", 00:18:14.459 "dhgroup": "ffdhe8192" 00:18:14.459 } 00:18:14.459 } 00:18:14.459 ]' 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:14.459 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:14.718 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:14.718 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:14.718 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:14.718 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:14.718 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:14.976 12:04:01 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:02:MTY0Mzk5YTBhN2YxODI1Y2EwNDcwNjg1Nzg4ZjE2Mjg0MGZiNzhjNTM5NTM3ZjFiPPhUJw==: --dhchap-ctrl-secret DHHC-1:01:MTcxMmViOWQ5N2UxMWEwZjRlYzhiYjVjMDMyN2RmYjQJGMWy: 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:15.544 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@93 -- # for keyid in "${!keys[@]}" 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@94 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@96 -- # connect_authenticate sha512 ffdhe8192 3 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target 
-- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:15.544 12:04:02 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:16.112 00:18:16.112 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:16.112 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:16.113 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:16.113 { 00:18:16.113 "cntlid": 143, 00:18:16.113 "qid": 0, 00:18:16.113 "state": "enabled", 00:18:16.113 "thread": "nvmf_tgt_poll_group_000", 00:18:16.113 "listen_address": { 00:18:16.113 "trtype": "TCP", 00:18:16.113 "adrfam": "IPv4", 00:18:16.113 "traddr": "10.0.0.2", 00:18:16.113 "trsvcid": "4420" 00:18:16.113 }, 00:18:16.113 "peer_address": { 00:18:16.113 "trtype": "TCP", 00:18:16.113 "adrfam": "IPv4", 00:18:16.113 "traddr": "10.0.0.1", 00:18:16.113 "trsvcid": "49224" 00:18:16.113 }, 00:18:16.113 "auth": { 00:18:16.113 "state": "completed", 00:18:16.113 "digest": "sha512", 00:18:16.113 "dhgroup": 
"ffdhe8192" 00:18:16.113 } 00:18:16.113 } 00:18:16.113 ]' 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:16.370 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:16.628 12:04:03 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:17.198 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s sha256,sha384,sha512 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # IFS=, 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@103 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@102 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@114 -- # connect_authenticate sha512 ffdhe8192 0 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- 
target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key0 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.198 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:18:17.768 00:18:17.768 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:17.768 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:17.768 12:04:04 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:18.027 { 00:18:18.027 "cntlid": 145, 00:18:18.027 "qid": 0, 00:18:18.027 "state": "enabled", 00:18:18.027 "thread": "nvmf_tgt_poll_group_000", 00:18:18.027 "listen_address": { 00:18:18.027 "trtype": "TCP", 00:18:18.027 "adrfam": "IPv4", 00:18:18.027 "traddr": "10.0.0.2", 00:18:18.027 "trsvcid": "4420" 00:18:18.027 }, 00:18:18.027 "peer_address": { 00:18:18.027 "trtype": "TCP", 00:18:18.027 "adrfam": "IPv4", 00:18:18.027 "traddr": "10.0.0.1", 00:18:18.027 "trsvcid": "49250" 00:18:18.027 }, 00:18:18.027 "auth": { 00:18:18.027 
"state": "completed", 00:18:18.027 "digest": "sha512", 00:18:18.027 "dhgroup": "ffdhe8192" 00:18:18.027 } 00:18:18.027 } 00:18:18.027 ]' 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:18.027 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:18.286 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:00:N2UwMTk1OTIxYWQ2YmEyZGE2Njc1ZmU3N2U4NThhMjIxNWYwN2U1MjM5OWYwZTU4NWXP2Q==: --dhchap-ctrl-secret DHHC-1:03:NDg5ZGNjZTVhZjU3ZDE3Y2ZjN2UwNWRkNjQ5M2FhZGI5MzJkYzBiY2VkYjBiOTNkYjBiMzU1YWYwN2UzNzMwOAAGbQk=: 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:18.854 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@117 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@118 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:18.854 12:04:05 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:18.854 12:04:05 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key2 00:18:19.113 request: 00:18:19.113 { 00:18:19.113 "name": "nvme0", 00:18:19.113 "trtype": "tcp", 00:18:19.113 "traddr": "10.0.0.2", 00:18:19.113 "adrfam": "ipv4", 00:18:19.113 "trsvcid": "4420", 00:18:19.113 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.113 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:19.113 "prchk_reftag": false, 00:18:19.113 "prchk_guard": false, 00:18:19.113 "hdgst": false, 00:18:19.113 "ddgst": false, 00:18:19.113 "dhchap_key": "key2", 00:18:19.113 "method": "bdev_nvme_attach_controller", 00:18:19.113 "req_id": 1 00:18:19.113 } 00:18:19.113 Got JSON-RPC error response 00:18:19.113 response: 00:18:19.113 { 00:18:19.113 "code": -5, 00:18:19.113 "message": "Input/output error" 00:18:19.113 } 00:18:19.113 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@121 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@124 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.372 
12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@125 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.372 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:18:19.632 request: 00:18:19.632 { 00:18:19.632 "name": "nvme0", 00:18:19.632 "trtype": "tcp", 00:18:19.632 "traddr": "10.0.0.2", 00:18:19.632 "adrfam": "ipv4", 00:18:19.632 "trsvcid": "4420", 00:18:19.632 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:19.632 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:19.632 "prchk_reftag": false, 00:18:19.632 "prchk_guard": false, 00:18:19.632 "hdgst": false, 00:18:19.632 "ddgst": false, 00:18:19.632 "dhchap_key": "key1", 00:18:19.632 "dhchap_ctrlr_key": "ckey2", 00:18:19.632 "method": "bdev_nvme_attach_controller", 00:18:19.632 "req_id": 1 00:18:19.632 } 00:18:19.632 Got JSON-RPC error response 00:18:19.632 response: 00:18:19.632 { 00:18:19.632 "code": -5, 00:18:19.632 "message": "Input/output error" 00:18:19.632 } 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:19.632 12:04:06 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@128 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@131 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key1 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@132 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:19.632 12:04:06 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:18:20.200 request: 00:18:20.200 { 00:18:20.200 "name": "nvme0", 00:18:20.200 "trtype": "tcp", 00:18:20.200 "traddr": "10.0.0.2", 00:18:20.200 "adrfam": "ipv4", 00:18:20.200 "trsvcid": "4420", 00:18:20.200 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:20.200 "hostnqn": 
"nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:20.200 "prchk_reftag": false, 00:18:20.200 "prchk_guard": false, 00:18:20.200 "hdgst": false, 00:18:20.200 "ddgst": false, 00:18:20.200 "dhchap_key": "key1", 00:18:20.200 "dhchap_ctrlr_key": "ckey1", 00:18:20.200 "method": "bdev_nvme_attach_controller", 00:18:20.200 "req_id": 1 00:18:20.200 } 00:18:20.200 Got JSON-RPC error response 00:18:20.200 response: 00:18:20.200 { 00:18:20.200 "code": -5, 00:18:20.200 "message": "Input/output error" 00:18:20.200 } 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@135 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@138 -- # killprocess 322709 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 322709 ']' 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 322709 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 322709 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 322709' 00:18:20.200 killing process with pid 322709 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 322709 00:18:20.200 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 322709 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@139 -- # nvmfappstart --wait-for-rpc -L nvmf_auth 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@481 -- # 
nvmfpid=343339 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@482 -- # waitforlisten 343339 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 343339 ']' 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:20.459 12:04:07 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@140 -- # trap 'dumplogs; cleanup' SIGINT SIGTERM EXIT 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@142 -- # waitforlisten 343339 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@829 -- # '[' -z 343339 ']' 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:21.397 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
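
At this point the auth test has killed the earlier nvmf_tgt instance (pid 322709) and is bringing up a fresh one with DH-HMAC-CHAP debug logging enabled, then blocking until its JSON-RPC socket answers. A minimal sketch of that restart sequence follows; it assumes the default /var/tmp/spdk.sock socket, and the poll loop plus the spdk_get_version/framework_start_init calls are illustrative stand-ins for the harness's own waitforlisten/rpc_cmd plumbing, not a copy of it:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  # start the target inside the test netns with nvmf_auth debug logging, deferring subsystem init
  ip netns exec cvl_0_0_ns_spdk "$SPDK/build/bin/nvmf_tgt" -i 0 -e 0xFFFF --wait-for-rpc -L nvmf_auth &
  # poll the RPC socket until the app is listening (stand-in for waitforlisten)
  until "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock spdk_get_version &> /dev/null; do sleep 0.5; done
  # with --wait-for-rpc the target stays in pre-init until this call, so options can be set first
  "$SPDK/scripts/rpc.py" -s /var/tmp/spdk.sock framework_start_init

The --wait-for-rpc flag is what lets the test configure the target over RPC before the NVMe-oF subsystems come up, which is why the trace above waits on the UNIX socket rather than on a listener port.
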
00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@862 -- # return 0 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@143 -- # rpc_cmd 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.397 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@153 -- # connect_authenticate sha512 ffdhe8192 3 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@34 -- # local digest dhgroup key ckey qpairs 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # digest=sha512 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # dhgroup=ffdhe8192 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@36 -- # key=key3 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@37 -- # ckey=(${ckeys[$3]:+--dhchap-ctrlr-key "ckey$3"}) 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@39 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@40 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:21.656 12:04:08 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:22.225 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # hostrpc bdev_nvme_get_controllers 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # jq -r '.[].name' 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@44 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # rpc_cmd 
nvmf_subsystem_get_qpairs nqn.2024-03.io.spdk:cnode0 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@45 -- # qpairs='[ 00:18:22.225 { 00:18:22.225 "cntlid": 1, 00:18:22.225 "qid": 0, 00:18:22.225 "state": "enabled", 00:18:22.225 "thread": "nvmf_tgt_poll_group_000", 00:18:22.225 "listen_address": { 00:18:22.225 "trtype": "TCP", 00:18:22.225 "adrfam": "IPv4", 00:18:22.225 "traddr": "10.0.0.2", 00:18:22.225 "trsvcid": "4420" 00:18:22.225 }, 00:18:22.225 "peer_address": { 00:18:22.225 "trtype": "TCP", 00:18:22.225 "adrfam": "IPv4", 00:18:22.225 "traddr": "10.0.0.1", 00:18:22.225 "trsvcid": "41898" 00:18:22.225 }, 00:18:22.225 "auth": { 00:18:22.225 "state": "completed", 00:18:22.225 "digest": "sha512", 00:18:22.225 "dhgroup": "ffdhe8192" 00:18:22.225 } 00:18:22.225 } 00:18:22.225 ]' 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # jq -r '.[0].auth.digest' 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@46 -- # [[ sha512 == \s\h\a\5\1\2 ]] 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # jq -r '.[0].auth.dhgroup' 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@47 -- # [[ ffdhe8192 == \f\f\d\h\e\8\1\9\2 ]] 00:18:22.225 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # jq -r '.[0].auth.state' 00:18:22.485 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@48 -- # [[ completed == \c\o\m\p\l\e\t\e\d ]] 00:18:22.485 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@49 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:22.485 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:22.485 12:04:09 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@52 -- # nvme connect -t tcp -a 10.0.0.2 -n nqn.2024-03.io.spdk:cnode0 -i 1 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid 80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-secret DHHC-1:03:OGU5N2RhZGEyYjA0ZmUwMmExNmNiNjdjN2FjZWE5ODc1ZWNkMzdhZTAzYmRkMWIwNGM5YjJlMmJjNGM4YTE0Yml/xOk=: 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@55 -- # nvme disconnect -n nqn.2024-03.io.spdk:cnode0 00:18:23.054 NQN:nqn.2024-03.io.spdk:cnode0 disconnected 1 controller(s) 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@56 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@156 -- # rpc_cmd nvmf_subsystem_add_host 
nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --dhchap-key key3 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@157 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256 00:18:23.054 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@158 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.314 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.574 request: 00:18:23.574 { 00:18:23.574 "name": "nvme0", 00:18:23.574 "trtype": "tcp", 00:18:23.574 "traddr": "10.0.0.2", 00:18:23.574 "adrfam": "ipv4", 00:18:23.574 "trsvcid": "4420", 00:18:23.574 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.574 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.574 "prchk_reftag": false, 00:18:23.574 "prchk_guard": false, 00:18:23.574 "hdgst": false, 00:18:23.574 "ddgst": false, 00:18:23.574 "dhchap_key": "key3", 00:18:23.574 "method": "bdev_nvme_attach_controller", 00:18:23.574 "req_id": 1 00:18:23.574 } 00:18:23.574 Got JSON-RPC error response 00:18:23.574 response: 00:18:23.574 { 00:18:23.574 "code": -5, 00:18:23.574 "message": "Input/output error" 00:18:23.574 } 00:18:23.574 12:04:10 
nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # IFS=, 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@164 -- # printf %s sha256,sha384,sha512 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@163 -- # hostrpc bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-dhgroups ffdhe2048 --dhchap-digests sha256,sha384,sha512 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@169 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.574 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key3 00:18:23.835 request: 00:18:23.835 { 00:18:23.835 "name": "nvme0", 00:18:23.835 "trtype": "tcp", 00:18:23.835 "traddr": "10.0.0.2", 00:18:23.835 "adrfam": "ipv4", 00:18:23.835 "trsvcid": "4420", 00:18:23.835 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:23.835 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:23.835 "prchk_reftag": false, 00:18:23.835 "prchk_guard": false, 00:18:23.835 "hdgst": false, 00:18:23.835 "ddgst": false, 00:18:23.835 "dhchap_key": "key3", 00:18:23.835 
"method": "bdev_nvme_attach_controller", 00:18:23.835 "req_id": 1 00:18:23.835 } 00:18:23.835 Got JSON-RPC error response 00:18:23.835 response: 00:18:23.835 { 00:18:23.835 "code": -5, 00:18:23.835 "message": "Input/output error" 00:18:23.835 } 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s sha256,sha384,sha512 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # IFS=, 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@176 -- # printf %s null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@175 -- # hostrpc bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:23.835 12:04:10 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups null,ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@186 -- # rpc_cmd nvmf_subsystem_remove_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@187 -- # rpc_cmd nvmf_subsystem_add_host nqn.2024-03.io.spdk:cnode0 nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@188 -- # NOT hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@648 -- # local es=0 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@650 -- # valid_exec_arg hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q 
nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@636 -- # local arg=hostrpc 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # type -t hostrpc 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.095 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key key1 00:18:24.095 request: 00:18:24.095 { 00:18:24.095 "name": "nvme0", 00:18:24.095 "trtype": "tcp", 00:18:24.095 "traddr": "10.0.0.2", 00:18:24.095 "adrfam": "ipv4", 00:18:24.095 "trsvcid": "4420", 00:18:24.095 "subnqn": "nqn.2024-03.io.spdk:cnode0", 00:18:24.095 "hostnqn": "nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562", 00:18:24.095 "prchk_reftag": false, 00:18:24.095 "prchk_guard": false, 00:18:24.095 "hdgst": false, 00:18:24.095 "ddgst": false, 00:18:24.095 "dhchap_key": "key0", 00:18:24.095 "dhchap_ctrlr_key": "key1", 00:18:24.095 "method": "bdev_nvme_attach_controller", 00:18:24.095 "req_id": 1 00:18:24.095 } 00:18:24.095 Got JSON-RPC error response 00:18:24.095 response: 00:18:24.095 { 00:18:24.095 "code": -5, 00:18:24.095 "message": "Input/output error" 00:18:24.095 } 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@651 -- # es=1 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@192 -- # hostrpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.2 -s 4420 -q nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 -n nqn.2024-03.io.spdk:cnode0 --dhchap-key key0 00:18:24.386 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # hostrpc bdev_nvme_get_controllers 00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # jq -r '.[].name' 
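
Here the script runs its usual verify-and-detach epilogue after a successful attach: list the host-side NVMe controllers over the host RPC socket, confirm the expected name came back, and detach it. Condensed into a few lines, with the socket path, RPC names, and controller name taken from the trace (the HOSTRPC shorthand variable is mine):

  HOSTRPC="/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock"
  name=$($HOSTRPC bdev_nvme_get_controllers | jq -r '.[].name')
  [[ $name == nvme0 ]]                        # the attach above registered controller nvme0
  $HOSTRPC bdev_nvme_detach_controller nvme0  # tear it down before the next key combination
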
00:18:24.386 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_get_controllers 00:18:24.646 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@195 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:18:24.646 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@196 -- # hostrpc bdev_nvme_detach_controller nvme0 00:18:24.646 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/host.sock bdev_nvme_detach_controller nvme0 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@198 -- # trap - SIGINT SIGTERM EXIT 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@199 -- # cleanup 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@21 -- # killprocess 322766 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 322766 ']' 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 322766 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 322766 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 322766' 00:18:24.906 killing process with pid 322766 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 322766 00:18:24.906 12:04:11 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 322766 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@22 -- # nvmftestfini 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@117 -- # sync 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@120 -- # set +e 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:25.167 rmmod nvme_tcp 00:18:25.167 rmmod nvme_fabrics 00:18:25.167 rmmod nvme_keyring 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@124 -- # set -e 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@125 -- # return 0 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@489 -- # '[' -n 
343339 ']' 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@490 -- # killprocess 343339 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@948 -- # '[' -z 343339 ']' 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@952 -- # kill -0 343339 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # uname 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 343339 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@966 -- # echo 'killing process with pid 343339' 00:18:25.167 killing process with pid 343339 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@967 -- # kill 343339 00:18:25.167 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@972 -- # wait 343339 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:25.427 12:04:12 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.965 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:27.965 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- target/auth.sh@23 -- # rm -f /tmp/spdk.key-null.DdM /tmp/spdk.key-sha256.Hqz /tmp/spdk.key-sha384.nfo /tmp/spdk.key-sha512.OlH /tmp/spdk.key-sha512.kIp /tmp/spdk.key-sha384.vXx /tmp/spdk.key-sha256.I7M '' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf-auth.log 00:18:27.965 00:18:27.965 real 2m8.755s 00:18:27.966 user 4m56.336s 00:18:27.966 sys 0m18.812s 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_auth_target -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 ************************************ 00:18:27.966 END TEST nvmf_auth_target 00:18:27.966 ************************************ 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- 
nvmf/nvmf_target_extra.sh@39 -- # '[' tcp = tcp ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@40 -- # run_test nvmf_bdevio_no_huge /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 4 -le 1 ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:27.966 ************************************ 00:18:27.966 START TEST nvmf_bdevio_no_huge 00:18:27.966 ************************************ 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevio.sh --transport=tcp --no-hugepages 00:18:27.966 * Looking for test storage... 00:18:27.966 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # uname -s 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 
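
The bdevio run that starts here pulls its connection parameters from test/nvmf/common.sh, which the trace shows generating a host NQN once and reusing it (together with the derived host ID) for every later nvme connect and RPC call. A rough recap of those assignments, using the values visible in the trace; the parameter expansion that strips the UUID prefix is an illustrative one-liner of mine, not copied from the script:

  # what test/nvmf/common.sh effectively sets up for this run (recap, not the script itself)
  NVMF_PORT=4420                                      # first listener port used by every connect below
  NVME_HOSTNQN=$(nvme gen-hostnqn)                    # e.g. nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-...
  NVME_HOSTID=${NVME_HOSTNQN##*uuid:}                 # illustrative way to recover the bare UUID
  NVME_HOST=(--hostnqn="$NVME_HOSTNQN" --hostid="$NVME_HOSTID")
  NVME_CONNECT='nvme connect'
  NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn
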
00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@5 -- # export PATH 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@47 -- # : 0 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:27.966 12:04:14 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@11 -- # MALLOC_BDEV_SIZE=64 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@14 -- # nvmftestinit 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@285 -- # xtrace_disable 00:18:27.966 12:04:14 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # pci_devs=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # net_devs=() 00:18:33.247 12:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # e810=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@296 -- # local -ga e810 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # x722=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@297 -- # local -ga x722 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # mlx=() 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@298 -- # local -ga mlx 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:33.247 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.247 12:04:20 
nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:33.247 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:33.247 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:33.248 Found net devices under 0000:86:00.0: cvl_0_0 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:33.248 
12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:33.248 Found net devices under 0000:86:00.1: cvl_0_1 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@414 -- # is_hw=yes 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:33.248 
12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:33.248 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:33.248 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:18:33.248 00:18:33.248 --- 10.0.0.2 ping statistics --- 00:18:33.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.248 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:33.248 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:18:33.248 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.579 ms 00:18:33.248 00:18:33.248 --- 10.0.0.1 ping statistics --- 00:18:33.248 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:33.248 rtt min/avg/max/mdev = 0.579/0.579/0.579/0.000 ms 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@422 -- # return 0 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:33.248 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@16 -- # nvmfappstart -m 0x78 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@481 -- # nvmfpid=347602 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@482 -- # waitforlisten 347602 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --no-huge -s 1024 -m 0x78 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@829 -- # '[' -z 347602 ']' 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:18:33.508 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:33.508 12:04:20 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:33.508 [2024-07-25 12:04:20.569957] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:18:33.508 [2024-07-25 12:04:20.570011] [ DPDK EAL parameters: nvmf -c 0x78 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk0 --proc-type=auto ] 00:18:33.508 [2024-07-25 12:04:20.633870] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:18:33.508 [2024-07-25 12:04:20.717362] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:33.508 [2024-07-25 12:04:20.717401] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:33.508 [2024-07-25 12:04:20.717408] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:33.508 [2024-07-25 12:04:20.717413] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:33.508 [2024-07-25 12:04:20.717421] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:33.508 [2024-07-25 12:04:20.717556] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:18:33.508 [2024-07-25 12:04:20.717669] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:18:33.508 [2024-07-25 12:04:20.717775] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:18:33.508 [2024-07-25 12:04:20.717776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@862 -- # return 0 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@18 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 [2024-07-25 12:04:21.412710] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 Malloc0 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@20 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@21 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@22 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:34.447 [2024-07-25 12:04:21.448947] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/bdev/bdevio/bdevio --json /dev/fd/62 --no-huge -s 1024 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@24 -- # gen_nvmf_target_json 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # config=() 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@532 -- # local subsystem config 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:18:34.447 { 00:18:34.447 "params": { 00:18:34.447 "name": "Nvme$subsystem", 00:18:34.447 "trtype": "$TEST_TRANSPORT", 00:18:34.447 "traddr": "$NVMF_FIRST_TARGET_IP", 00:18:34.447 "adrfam": "ipv4", 00:18:34.447 "trsvcid": "$NVMF_PORT", 00:18:34.447 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:18:34.447 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:18:34.447 "hdgst": ${hdgst:-false}, 00:18:34.447 "ddgst": ${ddgst:-false} 00:18:34.447 }, 00:18:34.447 "method": "bdev_nvme_attach_controller" 00:18:34.447 } 00:18:34.447 EOF 00:18:34.447 )") 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@554 -- # cat 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@556 -- # jq . 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@557 -- # IFS=, 00:18:34.447 12:04:21 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:18:34.447 "params": { 00:18:34.447 "name": "Nvme1", 00:18:34.447 "trtype": "tcp", 00:18:34.447 "traddr": "10.0.0.2", 00:18:34.447 "adrfam": "ipv4", 00:18:34.447 "trsvcid": "4420", 00:18:34.447 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:18:34.447 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:18:34.447 "hdgst": false, 00:18:34.447 "ddgst": false 00:18:34.447 }, 00:18:34.447 "method": "bdev_nvme_attach_controller" 00:18:34.447 }' 00:18:34.447 [2024-07-25 12:04:21.496403] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:18:34.447 [2024-07-25 12:04:21.496452] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 1024 --no-huge --iova-mode=va --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --file-prefix=spdk_pid347853 ] 00:18:34.447 [2024-07-25 12:04:21.554228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:18:34.447 [2024-07-25 12:04:21.640466] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:34.447 [2024-07-25 12:04:21.640563] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:18:34.447 [2024-07-25 12:04:21.640565] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:34.706 I/O targets: 00:18:34.706 Nvme1n1: 131072 blocks of 512 bytes (64 MiB) 00:18:34.706 00:18:34.706 00:18:34.706 CUnit - A unit testing framework for C - Version 2.1-3 00:18:34.706 http://cunit.sourceforge.net/ 00:18:34.706 00:18:34.706 00:18:34.706 Suite: bdevio tests on: Nvme1n1 00:18:34.966 Test: blockdev write read block ...passed 00:18:34.966 Test: blockdev write zeroes read block ...passed 00:18:34.966 Test: blockdev write zeroes read no split ...passed 00:18:34.966 Test: blockdev write zeroes read split ...passed 00:18:34.966 Test: blockdev write zeroes read split partial ...passed 00:18:34.966 Test: blockdev reset ...[2024-07-25 12:04:22.111491] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:18:34.966 [2024-07-25 12:04:22.111553] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23bd300 (9): Bad file descriptor 00:18:34.966 [2024-07-25 12:04:22.165853] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
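To recap the bring-up that precedes this test output: nvmfappstart launched nvmf_tgt inside the cvl_0_0_ns_spdk namespace with --no-huge -s 1024 (anonymous memory instead of hugepages, capped at 1024 MB), and the rpc_cmd calls above created the TCP transport, a 64 MiB malloc bdev, and subsystem nqn.2016-06.io.spdk:cnode1 listening on 10.0.0.2:4420. A minimal sketch of the same bring-up driven directly through scripts/rpc.py (rpc_cmd in this log is effectively a wrapper around it; socket path and netns handling are omitted here):

./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0                 # 64 MiB bdev, 512-byte blocks
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

bdevio then runs as the initiator with --json /dev/fd/62, consuming the bdev_nvme_attach_controller JSON printed above to connect to that listener; its per-test results follow below.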
00:18:34.966 passed 00:18:34.966 Test: blockdev write read 8 blocks ...passed 00:18:35.225 Test: blockdev write read size > 128k ...passed 00:18:35.225 Test: blockdev write read invalid size ...passed 00:18:35.225 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:18:35.225 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:18:35.225 Test: blockdev write read max offset ...passed 00:18:35.225 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:18:35.225 Test: blockdev writev readv 8 blocks ...passed 00:18:35.225 Test: blockdev writev readv 30 x 1block ...passed 00:18:35.225 Test: blockdev writev readv block ...passed 00:18:35.225 Test: blockdev writev readv size > 128k ...passed 00:18:35.225 Test: blockdev writev readv size > 128k in two iovs ...passed 00:18:35.225 Test: blockdev comparev and writev ...[2024-07-25 12:04:22.448282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.225 [2024-07-25 12:04:22.448310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:18:35.225 [2024-07-25 12:04:22.448328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.448336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.448927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.448938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.448950] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.448957] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.449532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.449543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:0 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.449555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.449562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - FAILED FUSED (00/09) qid:1 cid:1 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.450128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:1 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.450139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:1 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:18:35.226 [2024-07-25 12:04:22.450150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:0 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x200 00:18:35.226 [2024-07-25 12:04:22.450158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
FAILED FUSED (00/09) qid:1 cid:0 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:18:35.485 passed 00:18:35.485 Test: blockdev nvme passthru rw ...passed 00:18:35.485 Test: blockdev nvme passthru vendor specific ...[2024-07-25 12:04:22.535982] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.485 [2024-07-25 12:04:22.535996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:18:35.485 [2024-07-25 12:04:22.536481] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.485 [2024-07-25 12:04:22.536491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:18:35.485 [2024-07-25 12:04:22.536875] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.485 [2024-07-25 12:04:22.536885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:18:35.486 [2024-07-25 12:04:22.537271] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:18:35.486 [2024-07-25 12:04:22.537281] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:0 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:18:35.486 passed 00:18:35.486 Test: blockdev nvme admin passthru ...passed 00:18:35.486 Test: blockdev copy ...passed 00:18:35.486 00:18:35.486 Run Summary: Type Total Ran Passed Failed Inactive 00:18:35.486 suites 1 1 n/a 0 0 00:18:35.486 tests 23 23 23 0 0 00:18:35.486 asserts 152 152 152 0 n/a 00:18:35.486 00:18:35.486 Elapsed time = 1.374 seconds 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@26 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@559 -- # xtrace_disable 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@28 -- # trap - SIGINT SIGTERM EXIT 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- target/bdevio.sh@30 -- # nvmftestfini 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@488 -- # nvmfcleanup 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@117 -- # sync 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@120 -- # set +e 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@121 -- # for i in {1..20} 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:18:35.746 rmmod nvme_tcp 00:18:35.746 rmmod nvme_fabrics 00:18:35.746 rmmod nvme_keyring 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- 
nvmf/common.sh@124 -- # set -e 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@125 -- # return 0 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@489 -- # '[' -n 347602 ']' 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@490 -- # killprocess 347602 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@948 -- # '[' -z 347602 ']' 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@952 -- # kill -0 347602 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # uname 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 347602 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@954 -- # process_name=reactor_3 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@958 -- # '[' reactor_3 = sudo ']' 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@966 -- # echo 'killing process with pid 347602' 00:18:35.746 killing process with pid 347602 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@967 -- # kill 347602 00:18:35.746 12:04:22 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@972 -- # wait 347602 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@278 -- # remove_spdk_ns 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:36.316 12:04:23 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:18:38.224 00:18:38.224 real 0m10.590s 00:18:38.224 user 0m14.112s 00:18:38.224 sys 0m5.150s 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@1124 -- # xtrace_disable 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_bdevio_no_huge -- common/autotest_common.sh@10 -- # set +x 00:18:38.224 ************************************ 00:18:38.224 END TEST nvmf_bdevio_no_huge 00:18:38.224 ************************************ 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@41 -- # run_test nvmf_tls /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh 
--transport=tcp 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:18:38.224 ************************************ 00:18:38.224 START TEST nvmf_tls 00:18:38.224 ************************************ 00:18:38.224 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/tls.sh --transport=tcp 00:18:38.224 * Looking for test storage... 00:18:38.484 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # uname -s 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@5 -- # export PATH 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@47 -- # : 0 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 
00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@51 -- # have_pci_nics=0 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@62 -- # nvmftestinit 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@448 -- # prepare_net_devs 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@410 -- # local -g is_hw=no 00:18:38.484 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@412 -- # remove_spdk_ns 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@285 -- # xtrace_disable 00:18:38.485 12:04:25 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # pci_devs=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@291 -- # local -a pci_devs 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # pci_net_devs=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # pci_drivers=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@293 -- # local -A pci_drivers 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # net_devs=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@295 -- # local -ga net_devs 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # e810=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@296 -- # local -ga e810 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # x722=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@297 -- # local -ga x722 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # mlx=() 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@298 -- # local -ga mlx 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@304 -- # 
x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:18:43.763 Found 0000:86:00.0 (0x8086 - 0x159b) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:18:43.763 Found 0000:86:00.1 (0x8086 - 0x159b) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # 
[[ e810 == e810 ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:18:43.763 Found net devices under 0000:86:00.0: cvl_0_0 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@390 -- # [[ up == up ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:18:43.763 Found net devices under 0000:86:00.1: cvl_0_1 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@414 -- # is_hw=yes 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:18:43.763 12:04:30 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:18:43.763 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:18:43.764 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:18:43.764 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.168 ms 00:18:43.764 00:18:43.764 --- 10.0.0.2 ping statistics --- 00:18:43.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.764 rtt min/avg/max/mdev = 0.168/0.168/0.168/0.000 ms 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:18:43.764 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:18:43.764 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:18:43.764 00:18:43.764 --- 10.0.0.1 ping statistics --- 00:18:43.764 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:18:43.764 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@422 -- # return 0 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@63 -- # nvmfappstart -m 0x2 --wait-for-rpc 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=351498 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 --wait-for-rpc 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 351498 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 351498 ']' 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:18:43.764 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:43.764 12:04:30 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:43.764 [2024-07-25 12:04:30.729823] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:18:43.764 [2024-07-25 12:04:30.729868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:18:43.764 EAL: No free 2048 kB hugepages reported on node 1 00:18:43.764 [2024-07-25 12:04:30.788775] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:43.764 [2024-07-25 12:04:30.865209] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:18:43.764 [2024-07-25 12:04:30.865272] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:18:43.764 [2024-07-25 12:04:30.865280] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:18:43.764 [2024-07-25 12:04:30.865286] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:18:43.764 [2024-07-25 12:04:30.865291] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:18:43.764 [2024-07-25 12:04:30.865326] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@65 -- # '[' tcp '!=' tcp ']' 00:18:44.332 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@70 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_set_default_impl -i ssl 00:18:44.592 true 00:18:44.592 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.592 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # jq -r .tls_version 00:18:44.852 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@73 -- # version=0 00:18:44.852 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@74 -- # [[ 0 != \0 ]] 00:18:44.852 12:04:31 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:44.852 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:44.852 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # jq -r .tls_version 00:18:45.112 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@81 -- # version=13 00:18:45.112 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@82 -- # [[ 13 != \1\3 ]] 00:18:45.112 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 
7 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # jq -r .tls_version 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@89 -- # version=7 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@90 -- # [[ 7 != \7 ]] 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.372 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # jq -r .enable_ktls 00:18:45.632 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@96 -- # ktls=false 00:18:45.632 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@97 -- # [[ false != \f\a\l\s\e ]] 00:18:45.632 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --enable-ktls 00:18:45.946 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:45.946 12:04:32 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # jq -r .enable_ktls 00:18:45.946 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@104 -- # ktls=true 00:18:45.946 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@105 -- # [[ true != \t\r\u\e ]] 00:18:45.946 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@111 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --disable-ktls 00:18:46.215 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_get_options -i ssl 00:18:46.215 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # jq -r .enable_ktls 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@112 -- # ktls=false 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@113 -- # [[ false != \f\a\l\s\e ]] 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # format_interchange_psk 00112233445566778899aabbccddeeff 1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@118 -- # key=NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # format_interchange_psk ffeeddccbbaa99887766554433221100 1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 ffeeddccbbaa99887766554433221100 
1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=ffeeddccbbaa99887766554433221100 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=1 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@119 -- # key_2=NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # mktemp 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@121 -- # key_path=/tmp/tmp.FZbANw7H6Y 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # mktemp 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@122 -- # key_2_path=/tmp/tmp.pZ5hTSuDrV 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@124 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@125 -- # echo -n NVMeTLSkey-1:01:ZmZlZWRkY2NiYmFhOTk4ODc3NjY1NTQ0MzMyMjExMDBfBm/Y: 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@127 -- # chmod 0600 /tmp/tmp.FZbANw7H6Y 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@128 -- # chmod 0600 /tmp/tmp.pZ5hTSuDrV 00:18:46.475 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@130 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py sock_impl_set_options -i ssl --tls-version 13 00:18:46.735 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@131 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_start_init 00:18:46.735 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@133 -- # setup_nvmf_tgt /tmp/tmp.FZbANw7H6Y 00:18:46.735 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.FZbANw7H6Y 00:18:46.735 12:04:33 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:18:46.995 [2024-07-25 12:04:34.141686] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:18:46.995 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:18:47.254 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:18:47.255 [2024-07-25 12:04:34.478542] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:18:47.255 [2024-07-25 12:04:34.478755] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:18:47.255 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:18:47.514 malloc0 00:18:47.514 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:18:47.773 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZbANw7H6Y 00:18:47.773 [2024-07-25 12:04:34.984066] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:18:47.773 12:04:34 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@137 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' --psk-path /tmp/tmp.FZbANw7H6Y 00:18:47.773 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.986 Initializing NVMe Controllers 00:18:59.986 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:18:59.986 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:18:59.986 Initialization complete. Launching workers. 00:18:59.986 ======================================================== 00:18:59.986 Latency(us) 00:18:59.986 Device Information : IOPS MiB/s Average min max 00:18:59.986 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 16528.89 64.57 3872.39 768.51 6633.02 00:18:59.986 ======================================================== 00:18:59.986 Total : 16528.89 64.57 3872.39 768.51 6633.02 00:18:59.986 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@143 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZbANw7H6Y 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZbANw7H6Y' 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=353955 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 353955 /var/tmp/bdevperf.sock 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 353955 ']' 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:18:59.986 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:18:59.986 [2024-07-25 12:04:45.148558] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:18:59.986 [2024-07-25 12:04:45.148608] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid353955 ] 00:18:59.986 EAL: No free 2048 kB hugepages reported on node 1 00:18:59.986 [2024-07-25 12:04:45.197701] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:18:59.986 [2024-07-25 12:04:45.275970] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:18:59.986 12:04:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZbANw7H6Y 00:18:59.986 [2024-07-25 12:04:46.113629] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:18:59.986 [2024-07-25 12:04:46.113698] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:18:59.986 TLSTESTn1 00:18:59.986 12:04:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:18:59.986 Running I/O for 10 seconds... 
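Note: everything up to the "Running I/O for 10 seconds..." line above is the positive-path setup: the ssl socket implementation is selected and pinned to TLS 1.3, two PSKs in the NVMeTLSkey-1:01: interchange format are written to /tmp/tmp.FZbANw7H6Y and /tmp/tmp.pZ5hTSuDrV (mode 0600), a TLS-enabled NVMe/TCP target is stood up, and an initiator attaches over TLS. The recap below is hand-written from the RPC calls visible in the trace; binaries are abbreviated (the run invokes them from the spdk build tree and wraps spdk_nvme_perf in "ip netns exec cvl_0_0_ns_spdk"), and the NQNs, address and key paths are simply the values this particular run used.

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
key=/tmp/tmp.FZbANw7H6Y    # interchange-format PSK written by the test, chmod 0600

# socket layer: select the ssl implementation and require TLS 1.3
$rpc sock_set_default_impl -i ssl
$rpc sock_impl_set_options -i ssl --tls-version 13
$rpc framework_start_init

# target side: TCP transport, subsystem, TLS listener (-k), namespace, and the allowed host with its PSK
$rpc nvmf_create_transport -t tcp -o
$rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$rpc bdev_malloc_create 32 4096 -b malloc0
$rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $key

# initiator, variant 1: spdk_nvme_perf over the ssl socket implementation
spdk_nvme_perf -S ssl -q 64 -o 4096 -w randrw -M 30 -t 10 \
    -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 hostnqn:nqn.2016-06.io.spdk:host1' \
    --psk-path $key

# initiator, variant 2: bdevperf with a TLS NVMe bdev (the test waits for the RPC socket before attaching)
bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk $key
bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests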
00:19:09.971 00:19:09.971 Latency(us) 00:19:09.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.971 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:09.971 Verification LBA range: start 0x0 length 0x2000 00:19:09.971 TLSTESTn1 : 10.09 1246.61 4.87 0.00 0.00 102334.72 7151.97 154095.08 00:19:09.971 =================================================================================================================== 00:19:09.971 Total : 1246.61 4.87 0.00 0.00 102334.72 7151.97 154095.08 00:19:09.971 0 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 353955 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 353955 ']' 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 353955 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 353955 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 353955' 00:19:09.971 killing process with pid 353955 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 353955 00:19:09.971 Received shutdown signal, test time was about 10.000000 seconds 00:19:09.971 00:19:09.971 Latency(us) 00:19:09.971 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:09.971 =================================================================================================================== 00:19:09.971 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:09.971 [2024-07-25 12:04:56.489745] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 353955 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@146 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZ5hTSuDrV 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZ5hTSuDrV 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:09.971 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 
00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.pZ5hTSuDrV 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.pZ5hTSuDrV' 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=355791 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 355791 /var/tmp/bdevperf.sock 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 355791 ']' 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:09.972 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:09.972 12:04:56 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:09.972 [2024-07-25 12:04:56.717963] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:09.972 [2024-07-25 12:04:56.718011] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid355791 ] 00:19:09.972 EAL: No free 2048 kB hugepages reported on node 1 00:19:09.972 [2024-07-25 12:04:56.767231] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:09.972 [2024-07-25 12:04:56.836294] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:10.541 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:10.541 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:10.541 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pZ5hTSuDrV 00:19:10.541 [2024-07-25 12:04:57.690623] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:10.541 [2024-07-25 12:04:57.690695] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:10.541 [2024-07-25 12:04:57.700390] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:10.541 [2024-07-25 12:04:57.701282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cca570 (107): Transport endpoint is not connected 00:19:10.541 [2024-07-25 12:04:57.702274] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1cca570 (9): Bad file descriptor 00:19:10.541 [2024-07-25 12:04:57.703276] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:10.541 [2024-07-25 12:04:57.703285] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:10.541 [2024-07-25 12:04:57.703295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
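Note: this is the first of four deliberate failure cases. The attach above uses /tmp/tmp.pZ5hTSuDrV, the second generated key, which was never registered on the target via nvmf_subsystem_add_host, so the TLS handshake cannot complete; the connection is torn down (the errno 107 reads), the controller ends in the failed state, and bdev_nvme_attach_controller returns the "Input/output error" shown in the JSON-RPC exchange just below. The NOT/run_bdevperf wrapper in the trace counts that non-zero result as a pass. Reproducing just this check by hand amounts to running the same command the test used and expecting it to fail:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# expected to fail: this PSK is not registered for host1/cnode1 on the target
if ! $rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
        -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.pZ5hTSuDrV; then
    echo "attach failed as expected"
fi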
00:19:10.541 request: 00:19:10.541 { 00:19:10.541 "name": "TLSTEST", 00:19:10.541 "trtype": "tcp", 00:19:10.541 "traddr": "10.0.0.2", 00:19:10.541 "adrfam": "ipv4", 00:19:10.541 "trsvcid": "4420", 00:19:10.541 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:10.541 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:10.541 "prchk_reftag": false, 00:19:10.541 "prchk_guard": false, 00:19:10.541 "hdgst": false, 00:19:10.541 "ddgst": false, 00:19:10.541 "psk": "/tmp/tmp.pZ5hTSuDrV", 00:19:10.541 "method": "bdev_nvme_attach_controller", 00:19:10.542 "req_id": 1 00:19:10.542 } 00:19:10.542 Got JSON-RPC error response 00:19:10.542 response: 00:19:10.542 { 00:19:10.542 "code": -5, 00:19:10.542 "message": "Input/output error" 00:19:10.542 } 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 355791 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 355791 ']' 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 355791 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 355791 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 355791' 00:19:10.542 killing process with pid 355791 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 355791 00:19:10.542 Received shutdown signal, test time was about 10.000000 seconds 00:19:10.542 00:19:10.542 Latency(us) 00:19:10.542 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:10.542 =================================================================================================================== 00:19:10.542 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:10.542 [2024-07-25 12:04:57.770948] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:10.542 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 355791 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@149 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZbANw7H6Y 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZbANw7H6Y 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 /tmp/tmp.FZbANw7H6Y 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host2 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZbANw7H6Y' 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=356028 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 356028 /var/tmp/bdevperf.sock 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 356028 ']' 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:10.802 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:10.802 12:04:57 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:10.802 [2024-07-25 12:04:57.991056] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:10.802 [2024-07-25 12:04:57.991107] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356028 ] 00:19:10.802 EAL: No free 2048 kB hugepages reported on node 1 00:19:10.802 [2024-07-25 12:04:58.041369] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:11.062 [2024-07-25 12:04:58.111886] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:11.630 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:11.630 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:11.630 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FZbANw7H6Y 00:19:11.890 [2024-07-25 12:04:58.953427] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:11.890 [2024-07-25 12:04:58.953500] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:11.890 [2024-07-25 12:04:58.963543] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:11.890 [2024-07-25 12:04:58.963567] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host2 nqn.2016-06.io.spdk:cnode1 00:19:11.890 [2024-07-25 12:04:58.963591] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:11.890 [2024-07-25 12:04:58.964985] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9f570 (107): Transport endpoint is not connected 00:19:11.890 [2024-07-25 12:04:58.965978] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1f9f570 (9): Bad file descriptor 00:19:11.890 [2024-07-25 12:04:58.966984] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:11.890 [2024-07-25 12:04:58.966993] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:11.890 [2024-07-25 12:04:58.967002] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
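Note: the second failure case uses the correct key file but hostnqn nqn.2016-06.io.spdk:host2, which was never added to the subsystem. The target looks the PSK up by the TLS PSK identity ("NVMe0R01 <hostnqn> <subnqn>" in the errors above), so the lookup fails and the handshake is aborted; the same "Could not find PSK for identity" error appears again further down when the attach targets nqn.2016-06.io.spdk:cnode2 instead of cnode1. For contrast only (not something this run does), letting host2 connect would require registering it and its key the same way host1 was:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# hypothetical, not executed by this test: authorize host2 on cnode1 with the same PSK file
$rpc nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host2 --psk /tmp/tmp.FZbANw7H6Y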
00:19:11.890 request: 00:19:11.890 { 00:19:11.890 "name": "TLSTEST", 00:19:11.890 "trtype": "tcp", 00:19:11.890 "traddr": "10.0.0.2", 00:19:11.890 "adrfam": "ipv4", 00:19:11.890 "trsvcid": "4420", 00:19:11.890 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:11.890 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:19:11.890 "prchk_reftag": false, 00:19:11.890 "prchk_guard": false, 00:19:11.890 "hdgst": false, 00:19:11.890 "ddgst": false, 00:19:11.890 "psk": "/tmp/tmp.FZbANw7H6Y", 00:19:11.890 "method": "bdev_nvme_attach_controller", 00:19:11.890 "req_id": 1 00:19:11.890 } 00:19:11.890 Got JSON-RPC error response 00:19:11.890 response: 00:19:11.890 { 00:19:11.890 "code": -5, 00:19:11.890 "message": "Input/output error" 00:19:11.890 } 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 356028 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 356028 ']' 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 356028 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:11.890 12:04:58 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 356028 00:19:11.890 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:11.890 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:11.890 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 356028' 00:19:11.890 killing process with pid 356028 00:19:11.890 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 356028 00:19:11.890 Received shutdown signal, test time was about 10.000000 seconds 00:19:11.890 00:19:11.890 Latency(us) 00:19:11.890 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:11.890 =================================================================================================================== 00:19:11.890 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:11.890 [2024-07-25 12:04:59.033564] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:11.890 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 356028 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@152 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZbANw7H6Y 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg 
run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZbANw7H6Y 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode2 nqn.2016-06.io.spdk:host1 /tmp/tmp.FZbANw7H6Y 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode2 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.FZbANw7H6Y' 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=356260 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 356260 /var/tmp/bdevperf.sock 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 356260 ']' 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:12.150 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:12.150 12:04:59 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:12.150 [2024-07-25 12:04:59.256712] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:12.150 [2024-07-25 12:04:59.256762] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356260 ] 00:19:12.150 EAL: No free 2048 kB hugepages reported on node 1 00:19:12.150 [2024-07-25 12:04:59.306458] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:12.150 [2024-07-25 12:04:59.375191] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.FZbANw7H6Y 00:19:13.088 [2024-07-25 12:05:00.216808] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:13.088 [2024-07-25 12:05:00.216886] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:13.088 [2024-07-25 12:05:00.226752] tcp.c: 894:tcp_sock_get_key: *ERROR*: Could not find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:13.088 [2024-07-25 12:05:00.226778] posix.c: 574:posix_sock_psk_find_session_server_cb: *ERROR*: Unable to find PSK for identity: NVMe0R01 nqn.2016-06.io.spdk:host1 nqn.2016-06.io.spdk:cnode2 00:19:13.088 [2024-07-25 12:05:00.226802] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:13.088 [2024-07-25 12:05:00.227494] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1145570 (107): Transport endpoint is not connected 00:19:13.088 [2024-07-25 12:05:00.228486] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1145570 (9): Bad file descriptor 00:19:13.088 [2024-07-25 12:05:00.229488] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:19:13.088 [2024-07-25 12:05:00.229498] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:13.088 [2024-07-25 12:05:00.229507] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 
00:19:13.088 request: 00:19:13.088 { 00:19:13.088 "name": "TLSTEST", 00:19:13.088 "trtype": "tcp", 00:19:13.088 "traddr": "10.0.0.2", 00:19:13.088 "adrfam": "ipv4", 00:19:13.088 "trsvcid": "4420", 00:19:13.088 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:19:13.088 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:13.088 "prchk_reftag": false, 00:19:13.088 "prchk_guard": false, 00:19:13.088 "hdgst": false, 00:19:13.088 "ddgst": false, 00:19:13.088 "psk": "/tmp/tmp.FZbANw7H6Y", 00:19:13.088 "method": "bdev_nvme_attach_controller", 00:19:13.088 "req_id": 1 00:19:13.088 } 00:19:13.088 Got JSON-RPC error response 00:19:13.088 response: 00:19:13.088 { 00:19:13.088 "code": -5, 00:19:13.088 "message": "Input/output error" 00:19:13.088 } 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 356260 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 356260 ']' 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 356260 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 356260 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:13.088 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 356260' 00:19:13.088 killing process with pid 356260 00:19:13.089 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 356260 00:19:13.089 Received shutdown signal, test time was about 10.000000 seconds 00:19:13.089 00:19:13.089 Latency(us) 00:19:13.089 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.089 =================================================================================================================== 00:19:13.089 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:13.089 [2024-07-25 12:05:00.293404] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:13.089 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 356260 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@155 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf 
nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 '' 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk= 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=356504 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 356504 /var/tmp/bdevperf.sock 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 356504 ']' 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:13.348 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:13.349 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:13.349 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:13.349 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:13.349 12:05:00 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:13.349 [2024-07-25 12:05:00.519203] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:13.349 [2024-07-25 12:05:00.519251] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid356504 ] 00:19:13.349 EAL: No free 2048 kB hugepages reported on node 1 00:19:13.349 [2024-07-25 12:05:00.568683] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:13.608 [2024-07-25 12:05:00.638262] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:14.177 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:14.177 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:14.177 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:14.437 [2024-07-25 12:05:01.471503] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:19:14.437 [2024-07-25 12:05:01.473399] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xe51af0 (9): Bad file descriptor 00:19:14.437 [2024-07-25 12:05:01.474397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:19:14.437 [2024-07-25 12:05:01.474407] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 10.0.0.2 00:19:14.437 [2024-07-25 12:05:01.474416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
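Note: the last negative case omits --psk entirely. The listener was created with -k, so it only accepts TLS connections; the plain-TCP attach is dropped during setup (the errno 107 "Transport endpoint is not connected" read above), the controller goes to the failed state, and the JSON-RPC response below again reports "Input/output error". The failing command, exactly as in the trace but without a key, is:

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
# expected to fail: TLS-only listener on the target, no --psk supplied by the initiator
$rpc -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1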
00:19:14.437 request: 00:19:14.437 { 00:19:14.437 "name": "TLSTEST", 00:19:14.437 "trtype": "tcp", 00:19:14.437 "traddr": "10.0.0.2", 00:19:14.437 "adrfam": "ipv4", 00:19:14.437 "trsvcid": "4420", 00:19:14.437 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:14.437 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:14.437 "prchk_reftag": false, 00:19:14.437 "prchk_guard": false, 00:19:14.437 "hdgst": false, 00:19:14.437 "ddgst": false, 00:19:14.437 "method": "bdev_nvme_attach_controller", 00:19:14.437 "req_id": 1 00:19:14.437 } 00:19:14.437 Got JSON-RPC error response 00:19:14.437 response: 00:19:14.437 { 00:19:14.437 "code": -5, 00:19:14.437 "message": "Input/output error" 00:19:14.437 } 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 356504 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 356504 ']' 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 356504 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 356504 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 356504' 00:19:14.437 killing process with pid 356504 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 356504 00:19:14.437 Received shutdown signal, test time was about 10.000000 seconds 00:19:14.437 00:19:14.437 Latency(us) 00:19:14.437 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:14.437 =================================================================================================================== 00:19:14.437 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:14.437 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 356504 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@158 -- # killprocess 351498 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 351498 ']' 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 351498 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- 
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 351498 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 351498' 00:19:14.697 killing process with pid 351498 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 351498 00:19:14.697 [2024-07-25 12:05:01.757082] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:14.697 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 351498 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # format_interchange_psk 00112233445566778899aabbccddeeff0011223344556677 2 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff0011223344556677 2 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@702 -- # local prefix key digest 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff0011223344556677 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@704 -- # digest=2 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@705 -- # python - 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@159 -- # key_long=NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # mktemp 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@160 -- # key_long_path=/tmp/tmp.t8XFWvtk1m 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@161 -- # echo -n NVMeTLSkey-1:02:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmYwMDExMjIzMzQ0NTU2Njc3wWXNJw==: 00:19:14.989 12:05:01 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@162 -- # chmod 0600 /tmp/tmp.t8XFWvtk1m 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@163 -- # nvmfappstart -m 0x2 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=356752 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 356752 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 356752 ']' 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:14.989 12:05:02 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:14.989 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:14.989 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:14.989 [2024-07-25 12:05:02.058234] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:14.989 [2024-07-25 12:05:02.058281] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:14.989 EAL: No free 2048 kB hugepages reported on node 1 00:19:14.989 [2024-07-25 12:05:02.114213] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:14.989 [2024-07-25 12:05:02.185072] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:14.989 [2024-07-25 12:05:02.185109] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:14.989 [2024-07-25 12:05:02.185115] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:14.989 [2024-07-25 12:05:02.185121] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:14.990 [2024-07-25 12:05:02.185125] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
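Note: the NVMeTLSkey-1:01:...: and NVMeTLSkey-1:02:...: strings above (including /tmp/tmp.t8XFWvtk1m, the 48-byte key this second target instance is being set up with) are produced by the test's format_interchange_psk/format_key helpers, which shell out to "python -". The sketch below is a reconstruction based only on the inputs and outputs visible in this log: the hex-string argument appears to be used as ASCII bytes, a CRC32 of those bytes is appended, the result is base64-encoded, and the whole thing is wrapped as NVMeTLSkey-1:<digest>:<base64>:. The byte order of the appended CRC32 and the use of zlib are assumptions, not taken from the SPDK scripts, so verify the output against the keys printed above before relying on it.

prefix=NVMeTLSkey-1
key=00112233445566778899aabbccddeeff    # or the 48-byte hex string used for the :02: key
digest=1                                # 1 gives the :01: form, 2 the :02: form
python - "$prefix" "$key" "$digest" <<'EOF'
import base64, struct, sys, zlib
prefix, key, digest = sys.argv[1], sys.argv[2], int(sys.argv[3])
data = key.encode()                          # the hex string itself, as ASCII bytes
crc = struct.pack("<I", zlib.crc32(data))    # assumption: CRC32 appended little-endian
print(f"{prefix}:{digest:02}:{base64.b64encode(data + crc).decode()}:")
EOF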
00:19:14.990 [2024-07-25 12:05:02.185158] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@165 -- # setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.t8XFWvtk1m 00:19:15.928 12:05:02 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:15.928 [2024-07-25 12:05:03.040593] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:15.928 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:16.236 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:16.236 [2024-07-25 12:05:03.385478] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:16.236 [2024-07-25 12:05:03.385667] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:16.237 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:16.498 malloc0 00:19:16.498 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:16.498 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:16.757 [2024-07-25 12:05:03.882691] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:16.757 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@167 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8XFWvtk1m 00:19:16.757 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:16.757 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:16.757 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:16.757 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t8XFWvtk1m' 00:19:16.758 12:05:03 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=357016 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 357016 /var/tmp/bdevperf.sock 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 357016 ']' 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:16.758 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:16.758 12:05:03 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:16.758 [2024-07-25 12:05:03.940642] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:16.758 [2024-07-25 12:05:03.940688] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid357016 ] 00:19:16.758 EAL: No free 2048 kB hugepages reported on node 1 00:19:16.758 [2024-07-25 12:05:03.990159] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:17.017 [2024-07-25 12:05:04.064689] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:17.586 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:17.586 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:17.586 12:05:04 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:17.845 [2024-07-25 12:05:04.907510] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:17.845 [2024-07-25 12:05:04.907579] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:17.845 TLSTESTn1 00:19:17.845 12:05:05 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:18.105 Running I/O for 10 seconds... 
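Everything setup_nvmf_tgt and run_bdevperf traced above reduces to a short RPC sequence: a TCP transport, a subsystem with a TLS-enabled listener (-k), a malloc-backed namespace, a host entry carrying the pre-shared key, and then a bdevperf initiator that attaches with the same key and drives the verify workload. A condensed sketch using the NQNs, address and temporary key path shown in the trace (waiting for each RPC socket to come up is omitted for brevity):

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
KEY=/tmp/tmp.t8XFWvtk1m                      # PSK file created earlier in the test

# target side: transport, subsystem, TLS listener, backing namespace, allowed host
$RPC nvmf_create_transport -t tcp -o
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
$RPC bdev_malloc_create 32 4096 -b malloc0
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
$RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"

# initiator side: bdevperf in RPC mode, TLS-protected controller, 10 s verify run
"$SPDK/build/examples/bdevperf" -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &
$RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk "$KEY"
"$SPDK/examples/bdev/bdevperf/bdevperf.py" -t 20 -s /var/tmp/bdevperf.sock perform_tests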
00:19:28.093 00:19:28.093 Latency(us) 00:19:28.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.093 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:28.093 Verification LBA range: start 0x0 length 0x2000 00:19:28.093 TLSTESTn1 : 10.12 1225.69 4.79 0.00 0.00 103945.83 7123.48 155006.89 00:19:28.093 =================================================================================================================== 00:19:28.093 Total : 1225.69 4.79 0.00 0.00 103945.83 7123.48 155006.89 00:19:28.093 0 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@44 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@45 -- # killprocess 357016 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 357016 ']' 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 357016 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 357016 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 357016' 00:19:28.093 killing process with pid 357016 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 357016 00:19:28.093 Received shutdown signal, test time was about 10.000000 seconds 00:19:28.093 00:19:28.093 Latency(us) 00:19:28.093 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:28.093 =================================================================================================================== 00:19:28.093 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:28.093 [2024-07-25 12:05:15.302435] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:28.093 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 357016 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@170 -- # chmod 0666 /tmp/tmp.t8XFWvtk1m 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@171 -- # NOT run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8XFWvtk1m 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8XFWvtk1m 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=run_bdevperf 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t run_bdevperf 00:19:28.354 
12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # run_bdevperf nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 /tmp/tmp.t8XFWvtk1m 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@22 -- # local subnqn hostnqn psk 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # subnqn=nqn.2016-06.io.spdk:cnode1 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # hostnqn=nqn.2016-06.io.spdk:host1 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@23 -- # psk='--psk /tmp/tmp.t8XFWvtk1m' 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@25 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@28 -- # bdevperf_pid=358869 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@30 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@31 -- # waitforlisten 358869 /var/tmp/bdevperf.sock 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 358869 ']' 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:28.354 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:28.354 12:05:15 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:28.354 [2024-07-25 12:05:15.540686] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:28.354 [2024-07-25 12:05:15.540736] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid358869 ] 00:19:28.354 EAL: No free 2048 kB hugepages reported on node 1 00:19:28.354 [2024-07-25 12:05:15.591235] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:28.614 [2024-07-25 12:05:15.663477] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:29.189 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:29.189 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:29.189 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:29.450 [2024-07-25 12:05:16.501726] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:29.450 [2024-07-25 12:05:16.501777] bdev_nvme.c:6153:bdev_nvme_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:29.450 [2024-07-25 12:05:16.501784] bdev_nvme.c:6258:bdev_nvme_create: *ERROR*: Could not load PSK from /tmp/tmp.t8XFWvtk1m 00:19:29.450 request: 00:19:29.450 { 00:19:29.450 "name": "TLSTEST", 00:19:29.450 "trtype": "tcp", 00:19:29.450 "traddr": "10.0.0.2", 00:19:29.450 "adrfam": "ipv4", 00:19:29.450 "trsvcid": "4420", 00:19:29.450 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:29.450 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:29.450 "prchk_reftag": false, 00:19:29.450 "prchk_guard": false, 00:19:29.450 "hdgst": false, 00:19:29.450 "ddgst": false, 00:19:29.450 "psk": "/tmp/tmp.t8XFWvtk1m", 00:19:29.450 "method": "bdev_nvme_attach_controller", 00:19:29.450 "req_id": 1 00:19:29.450 } 00:19:29.450 Got JSON-RPC error response 00:19:29.450 response: 00:19:29.450 { 00:19:29.450 "code": -1, 00:19:29.450 "message": "Operation not permitted" 00:19:29.450 } 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@36 -- # killprocess 358869 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 358869 ']' 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 358869 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 358869 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 358869' 00:19:29.450 killing process with pid 358869 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 358869 00:19:29.450 Received shutdown signal, test time was about 10.000000 seconds 00:19:29.450 
00:19:29.450 Latency(us) 00:19:29.450 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:29.450 =================================================================================================================== 00:19:29.450 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:29.450 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 358869 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@37 -- # return 1 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@174 -- # killprocess 356752 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 356752 ']' 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 356752 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 356752 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 356752' 00:19:29.710 killing process with pid 356752 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 356752 00:19:29.710 [2024-07-25 12:05:16.762526] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 356752 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@175 -- # nvmfappstart -m 0x2 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=359148 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 359148 00:19:29.710 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:29.970 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 359148 ']' 00:19:29.970 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:29.970 12:05:16 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:29.970 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:29.970 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:29.970 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:29.970 12:05:16 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:29.970 [2024-07-25 12:05:17.006656] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:29.970 [2024-07-25 12:05:17.006704] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:29.970 EAL: No free 2048 kB hugepages reported on node 1 00:19:29.970 [2024-07-25 12:05:17.065118] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:29.970 [2024-07-25 12:05:17.138335] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:29.970 [2024-07-25 12:05:17.138373] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:29.970 [2024-07-25 12:05:17.138381] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:29.970 [2024-07-25 12:05:17.138388] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:29.970 [2024-07-25 12:05:17.138393] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
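With the key file deliberately left at mode 0666 (the chmod earlier in this log), the freshly restarted target is now expected to refuse it: the nvmf_subsystem_add_host call below fails with "Could not retrieve PSK from file" and an Internal error JSON-RPC response. A minimal sketch of that negative check, assuming the transport, subsystem, listener and namespace were created exactly as in the successful run and using a plain exit-status test in place of the harness's NOT helper:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"
KEY=/tmp/tmp.t8XFWvtk1m

chmod 0666 "$KEY"    # world-readable on purpose; the target must reject this

if $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk "$KEY"; then
    echo "unexpected success: a world-readable PSK file was accepted" >&2
    exit 1
fi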
00:19:29.970 [2024-07-25 12:05:17.138411] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@177 -- # NOT setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@648 -- # local es=0 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@650 -- # valid_exec_arg setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@636 -- # local arg=setup_nvmf_tgt 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # type -t setup_nvmf_tgt 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.t8XFWvtk1m 00:19:30.909 12:05:17 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:30.909 [2024-07-25 12:05:18.001869] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:30.909 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:31.169 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:31.169 [2024-07-25 12:05:18.350770] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:31.169 [2024-07-25 12:05:18.350966] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:31.169 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:31.428 malloc0 00:19:31.428 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 
nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:31.688 [2024-07-25 12:05:18.856311] tcp.c:3635:tcp_load_psk: *ERROR*: Incorrect permissions for PSK file 00:19:31.688 [2024-07-25 12:05:18.856341] tcp.c:3721:nvmf_tcp_subsystem_add_host: *ERROR*: Could not retrieve PSK from file 00:19:31.688 [2024-07-25 12:05:18.856363] subsystem.c:1052:spdk_nvmf_subsystem_add_host_ext: *ERROR*: Unable to add host to TCP transport 00:19:31.688 request: 00:19:31.688 { 00:19:31.688 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:31.688 "host": "nqn.2016-06.io.spdk:host1", 00:19:31.688 "psk": "/tmp/tmp.t8XFWvtk1m", 00:19:31.688 "method": "nvmf_subsystem_add_host", 00:19:31.688 "req_id": 1 00:19:31.688 } 00:19:31.688 Got JSON-RPC error response 00:19:31.688 response: 00:19:31.688 { 00:19:31.688 "code": -32603, 00:19:31.688 "message": "Internal error" 00:19:31.688 } 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@651 -- # es=1 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@180 -- # killprocess 359148 00:19:31.688 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 359148 ']' 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 359148 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 359148 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 359148' 00:19:31.689 killing process with pid 359148 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 359148 00:19:31.689 12:05:18 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 359148 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@181 -- # chmod 0600 /tmp/tmp.t8XFWvtk1m 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@184 -- # nvmfappstart -m 0x2 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=359584 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 
359584 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 359584 ']' 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:31.948 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:31.948 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:31.948 [2024-07-25 12:05:19.169813] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:31.948 [2024-07-25 12:05:19.169858] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:31.948 EAL: No free 2048 kB hugepages reported on node 1 00:19:32.207 [2024-07-25 12:05:19.227406] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:32.208 [2024-07-25 12:05:19.296441] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:32.208 [2024-07-25 12:05:19.296478] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:32.208 [2024-07-25 12:05:19.296485] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:32.208 [2024-07-25 12:05:19.296491] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:32.208 [2024-07-25 12:05:19.296496] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
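The key has since been tightened to mode 0600, so this restart is followed by a setup_nvmf_tgt pass that succeeds end to end, after which the test snapshots the running configuration of both applications over RPC; the two large JSON blocks below (tgtconf and bdevperfconf) are those snapshots. A short sketch of the capture step, using the target's default /var/tmp/spdk.sock and the bdevperf socket from the runs above:

SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
RPC="$SPDK/scripts/rpc.py"

chmod 0600 /tmp/tmp.t8XFWvtk1m               # owner-only access satisfies the PSK permission check

tgtconf=$($RPC save_config)                                   # target JSON: transport, subsystem, TLS listener, PSK host
bdevperfconf=$($RPC -s /var/tmp/bdevperf.sock save_config)    # initiator JSON: bdev_nvme_attach_controller with its PSK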
00:19:32.208 [2024-07-25 12:05:19.296513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@185 -- # setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:32.776 12:05:19 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.t8XFWvtk1m 00:19:32.776 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:33.036 [2024-07-25 12:05:20.156565] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:33.036 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:33.295 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:33.295 [2024-07-25 12:05:20.493423] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:33.295 [2024-07-25 12:05:20.493622] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:33.295 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:33.554 malloc0 00:19:33.554 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:33.814 12:05:20 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:33.814 [2024-07-25 12:05:21.014910] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@188 -- # bdevperf_pid=359854 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@190 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@191 -- # waitforlisten 359854 /var/tmp/bdevperf.sock 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@187 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # 
'[' -z 359854 ']' 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:33.814 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:33.814 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:34.074 [2024-07-25 12:05:21.068458] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:34.074 [2024-07-25 12:05:21.068505] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid359854 ] 00:19:34.074 EAL: No free 2048 kB hugepages reported on node 1 00:19:34.074 [2024-07-25 12:05:21.117849] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:34.074 [2024-07-25 12:05:21.196465] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:34.643 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:34.643 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:34.643 12:05:21 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@192 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:34.902 [2024-07-25 12:05:22.019290] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:34.902 [2024-07-25 12:05:22.019360] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:34.902 TLSTESTn1 00:19:34.902 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py save_config 00:19:35.162 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@196 -- # tgtconf='{ 00:19:35.162 "subsystems": [ 00:19:35.162 { 00:19:35.162 "subsystem": "keyring", 00:19:35.162 "config": [] 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "subsystem": "iobuf", 00:19:35.162 "config": [ 00:19:35.162 { 00:19:35.162 "method": "iobuf_set_options", 00:19:35.162 "params": { 00:19:35.162 "small_pool_count": 8192, 00:19:35.162 "large_pool_count": 1024, 00:19:35.162 "small_bufsize": 8192, 00:19:35.162 "large_bufsize": 135168 00:19:35.162 } 00:19:35.162 } 00:19:35.162 ] 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "subsystem": "sock", 00:19:35.162 "config": [ 00:19:35.162 { 00:19:35.162 "method": "sock_set_default_impl", 00:19:35.162 "params": { 00:19:35.162 "impl_name": "posix" 00:19:35.162 } 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "method": "sock_impl_set_options", 00:19:35.162 "params": { 00:19:35.162 "impl_name": "ssl", 00:19:35.162 "recv_buf_size": 4096, 00:19:35.162 "send_buf_size": 4096, 
00:19:35.162 "enable_recv_pipe": true, 00:19:35.162 "enable_quickack": false, 00:19:35.162 "enable_placement_id": 0, 00:19:35.162 "enable_zerocopy_send_server": true, 00:19:35.162 "enable_zerocopy_send_client": false, 00:19:35.162 "zerocopy_threshold": 0, 00:19:35.162 "tls_version": 0, 00:19:35.162 "enable_ktls": false 00:19:35.162 } 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "method": "sock_impl_set_options", 00:19:35.162 "params": { 00:19:35.162 "impl_name": "posix", 00:19:35.162 "recv_buf_size": 2097152, 00:19:35.162 "send_buf_size": 2097152, 00:19:35.162 "enable_recv_pipe": true, 00:19:35.162 "enable_quickack": false, 00:19:35.162 "enable_placement_id": 0, 00:19:35.162 "enable_zerocopy_send_server": true, 00:19:35.162 "enable_zerocopy_send_client": false, 00:19:35.162 "zerocopy_threshold": 0, 00:19:35.162 "tls_version": 0, 00:19:35.162 "enable_ktls": false 00:19:35.162 } 00:19:35.162 } 00:19:35.162 ] 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "subsystem": "vmd", 00:19:35.162 "config": [] 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "subsystem": "accel", 00:19:35.162 "config": [ 00:19:35.162 { 00:19:35.162 "method": "accel_set_options", 00:19:35.162 "params": { 00:19:35.162 "small_cache_size": 128, 00:19:35.162 "large_cache_size": 16, 00:19:35.162 "task_count": 2048, 00:19:35.162 "sequence_count": 2048, 00:19:35.162 "buf_count": 2048 00:19:35.162 } 00:19:35.162 } 00:19:35.162 ] 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "subsystem": "bdev", 00:19:35.162 "config": [ 00:19:35.162 { 00:19:35.162 "method": "bdev_set_options", 00:19:35.162 "params": { 00:19:35.162 "bdev_io_pool_size": 65535, 00:19:35.162 "bdev_io_cache_size": 256, 00:19:35.162 "bdev_auto_examine": true, 00:19:35.162 "iobuf_small_cache_size": 128, 00:19:35.162 "iobuf_large_cache_size": 16 00:19:35.162 } 00:19:35.162 }, 00:19:35.162 { 00:19:35.162 "method": "bdev_raid_set_options", 00:19:35.162 "params": { 00:19:35.162 "process_window_size_kb": 1024, 00:19:35.163 "process_max_bandwidth_mb_sec": 0 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "bdev_iscsi_set_options", 00:19:35.163 "params": { 00:19:35.163 "timeout_sec": 30 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "bdev_nvme_set_options", 00:19:35.163 "params": { 00:19:35.163 "action_on_timeout": "none", 00:19:35.163 "timeout_us": 0, 00:19:35.163 "timeout_admin_us": 0, 00:19:35.163 "keep_alive_timeout_ms": 10000, 00:19:35.163 "arbitration_burst": 0, 00:19:35.163 "low_priority_weight": 0, 00:19:35.163 "medium_priority_weight": 0, 00:19:35.163 "high_priority_weight": 0, 00:19:35.163 "nvme_adminq_poll_period_us": 10000, 00:19:35.163 "nvme_ioq_poll_period_us": 0, 00:19:35.163 "io_queue_requests": 0, 00:19:35.163 "delay_cmd_submit": true, 00:19:35.163 "transport_retry_count": 4, 00:19:35.163 "bdev_retry_count": 3, 00:19:35.163 "transport_ack_timeout": 0, 00:19:35.163 "ctrlr_loss_timeout_sec": 0, 00:19:35.163 "reconnect_delay_sec": 0, 00:19:35.163 "fast_io_fail_timeout_sec": 0, 00:19:35.163 "disable_auto_failback": false, 00:19:35.163 "generate_uuids": false, 00:19:35.163 "transport_tos": 0, 00:19:35.163 "nvme_error_stat": false, 00:19:35.163 "rdma_srq_size": 0, 00:19:35.163 "io_path_stat": false, 00:19:35.163 "allow_accel_sequence": false, 00:19:35.163 "rdma_max_cq_size": 0, 00:19:35.163 "rdma_cm_event_timeout_ms": 0, 00:19:35.163 "dhchap_digests": [ 00:19:35.163 "sha256", 00:19:35.163 "sha384", 00:19:35.163 "sha512" 00:19:35.163 ], 00:19:35.163 "dhchap_dhgroups": [ 00:19:35.163 "null", 00:19:35.163 "ffdhe2048", 00:19:35.163 
"ffdhe3072", 00:19:35.163 "ffdhe4096", 00:19:35.163 "ffdhe6144", 00:19:35.163 "ffdhe8192" 00:19:35.163 ] 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "bdev_nvme_set_hotplug", 00:19:35.163 "params": { 00:19:35.163 "period_us": 100000, 00:19:35.163 "enable": false 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "bdev_malloc_create", 00:19:35.163 "params": { 00:19:35.163 "name": "malloc0", 00:19:35.163 "num_blocks": 8192, 00:19:35.163 "block_size": 4096, 00:19:35.163 "physical_block_size": 4096, 00:19:35.163 "uuid": "b0b7d350-053b-41b5-a586-f6562726e1d6", 00:19:35.163 "optimal_io_boundary": 0, 00:19:35.163 "md_size": 0, 00:19:35.163 "dif_type": 0, 00:19:35.163 "dif_is_head_of_md": false, 00:19:35.163 "dif_pi_format": 0 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "bdev_wait_for_examine" 00:19:35.163 } 00:19:35.163 ] 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "subsystem": "nbd", 00:19:35.163 "config": [] 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "subsystem": "scheduler", 00:19:35.163 "config": [ 00:19:35.163 { 00:19:35.163 "method": "framework_set_scheduler", 00:19:35.163 "params": { 00:19:35.163 "name": "static" 00:19:35.163 } 00:19:35.163 } 00:19:35.163 ] 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "subsystem": "nvmf", 00:19:35.163 "config": [ 00:19:35.163 { 00:19:35.163 "method": "nvmf_set_config", 00:19:35.163 "params": { 00:19:35.163 "discovery_filter": "match_any", 00:19:35.163 "admin_cmd_passthru": { 00:19:35.163 "identify_ctrlr": false 00:19:35.163 } 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_set_max_subsystems", 00:19:35.163 "params": { 00:19:35.163 "max_subsystems": 1024 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_set_crdt", 00:19:35.163 "params": { 00:19:35.163 "crdt1": 0, 00:19:35.163 "crdt2": 0, 00:19:35.163 "crdt3": 0 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_create_transport", 00:19:35.163 "params": { 00:19:35.163 "trtype": "TCP", 00:19:35.163 "max_queue_depth": 128, 00:19:35.163 "max_io_qpairs_per_ctrlr": 127, 00:19:35.163 "in_capsule_data_size": 4096, 00:19:35.163 "max_io_size": 131072, 00:19:35.163 "io_unit_size": 131072, 00:19:35.163 "max_aq_depth": 128, 00:19:35.163 "num_shared_buffers": 511, 00:19:35.163 "buf_cache_size": 4294967295, 00:19:35.163 "dif_insert_or_strip": false, 00:19:35.163 "zcopy": false, 00:19:35.163 "c2h_success": false, 00:19:35.163 "sock_priority": 0, 00:19:35.163 "abort_timeout_sec": 1, 00:19:35.163 "ack_timeout": 0, 00:19:35.163 "data_wr_pool_size": 0 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_create_subsystem", 00:19:35.163 "params": { 00:19:35.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.163 "allow_any_host": false, 00:19:35.163 "serial_number": "SPDK00000000000001", 00:19:35.163 "model_number": "SPDK bdev Controller", 00:19:35.163 "max_namespaces": 10, 00:19:35.163 "min_cntlid": 1, 00:19:35.163 "max_cntlid": 65519, 00:19:35.163 "ana_reporting": false 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_subsystem_add_host", 00:19:35.163 "params": { 00:19:35.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.163 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.163 "psk": "/tmp/tmp.t8XFWvtk1m" 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_subsystem_add_ns", 00:19:35.163 "params": { 00:19:35.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.163 "namespace": { 00:19:35.163 "nsid": 1, 00:19:35.163 
"bdev_name": "malloc0", 00:19:35.163 "nguid": "B0B7D350053B41B5A586F6562726E1D6", 00:19:35.163 "uuid": "b0b7d350-053b-41b5-a586-f6562726e1d6", 00:19:35.163 "no_auto_visible": false 00:19:35.163 } 00:19:35.163 } 00:19:35.163 }, 00:19:35.163 { 00:19:35.163 "method": "nvmf_subsystem_add_listener", 00:19:35.163 "params": { 00:19:35.163 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.163 "listen_address": { 00:19:35.163 "trtype": "TCP", 00:19:35.163 "adrfam": "IPv4", 00:19:35.163 "traddr": "10.0.0.2", 00:19:35.163 "trsvcid": "4420" 00:19:35.164 }, 00:19:35.164 "secure_channel": true 00:19:35.164 } 00:19:35.164 } 00:19:35.164 ] 00:19:35.164 } 00:19:35.164 ] 00:19:35.164 }' 00:19:35.164 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:35.424 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@197 -- # bdevperfconf='{ 00:19:35.424 "subsystems": [ 00:19:35.424 { 00:19:35.424 "subsystem": "keyring", 00:19:35.424 "config": [] 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "subsystem": "iobuf", 00:19:35.424 "config": [ 00:19:35.424 { 00:19:35.424 "method": "iobuf_set_options", 00:19:35.424 "params": { 00:19:35.424 "small_pool_count": 8192, 00:19:35.424 "large_pool_count": 1024, 00:19:35.424 "small_bufsize": 8192, 00:19:35.424 "large_bufsize": 135168 00:19:35.424 } 00:19:35.424 } 00:19:35.424 ] 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "subsystem": "sock", 00:19:35.424 "config": [ 00:19:35.424 { 00:19:35.424 "method": "sock_set_default_impl", 00:19:35.424 "params": { 00:19:35.424 "impl_name": "posix" 00:19:35.424 } 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "method": "sock_impl_set_options", 00:19:35.424 "params": { 00:19:35.424 "impl_name": "ssl", 00:19:35.424 "recv_buf_size": 4096, 00:19:35.424 "send_buf_size": 4096, 00:19:35.424 "enable_recv_pipe": true, 00:19:35.424 "enable_quickack": false, 00:19:35.424 "enable_placement_id": 0, 00:19:35.424 "enable_zerocopy_send_server": true, 00:19:35.424 "enable_zerocopy_send_client": false, 00:19:35.424 "zerocopy_threshold": 0, 00:19:35.424 "tls_version": 0, 00:19:35.424 "enable_ktls": false 00:19:35.424 } 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "method": "sock_impl_set_options", 00:19:35.424 "params": { 00:19:35.424 "impl_name": "posix", 00:19:35.424 "recv_buf_size": 2097152, 00:19:35.424 "send_buf_size": 2097152, 00:19:35.424 "enable_recv_pipe": true, 00:19:35.424 "enable_quickack": false, 00:19:35.424 "enable_placement_id": 0, 00:19:35.424 "enable_zerocopy_send_server": true, 00:19:35.424 "enable_zerocopy_send_client": false, 00:19:35.424 "zerocopy_threshold": 0, 00:19:35.424 "tls_version": 0, 00:19:35.424 "enable_ktls": false 00:19:35.424 } 00:19:35.424 } 00:19:35.424 ] 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "subsystem": "vmd", 00:19:35.424 "config": [] 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "subsystem": "accel", 00:19:35.424 "config": [ 00:19:35.424 { 00:19:35.424 "method": "accel_set_options", 00:19:35.424 "params": { 00:19:35.424 "small_cache_size": 128, 00:19:35.424 "large_cache_size": 16, 00:19:35.424 "task_count": 2048, 00:19:35.424 "sequence_count": 2048, 00:19:35.424 "buf_count": 2048 00:19:35.424 } 00:19:35.424 } 00:19:35.424 ] 00:19:35.424 }, 00:19:35.424 { 00:19:35.424 "subsystem": "bdev", 00:19:35.424 "config": [ 00:19:35.424 { 00:19:35.424 "method": "bdev_set_options", 00:19:35.424 "params": { 00:19:35.424 "bdev_io_pool_size": 65535, 00:19:35.424 "bdev_io_cache_size": 256, 00:19:35.424 
"bdev_auto_examine": true, 00:19:35.424 "iobuf_small_cache_size": 128, 00:19:35.424 "iobuf_large_cache_size": 16 00:19:35.424 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_raid_set_options", 00:19:35.425 "params": { 00:19:35.425 "process_window_size_kb": 1024, 00:19:35.425 "process_max_bandwidth_mb_sec": 0 00:19:35.425 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_iscsi_set_options", 00:19:35.425 "params": { 00:19:35.425 "timeout_sec": 30 00:19:35.425 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_nvme_set_options", 00:19:35.425 "params": { 00:19:35.425 "action_on_timeout": "none", 00:19:35.425 "timeout_us": 0, 00:19:35.425 "timeout_admin_us": 0, 00:19:35.425 "keep_alive_timeout_ms": 10000, 00:19:35.425 "arbitration_burst": 0, 00:19:35.425 "low_priority_weight": 0, 00:19:35.425 "medium_priority_weight": 0, 00:19:35.425 "high_priority_weight": 0, 00:19:35.425 "nvme_adminq_poll_period_us": 10000, 00:19:35.425 "nvme_ioq_poll_period_us": 0, 00:19:35.425 "io_queue_requests": 512, 00:19:35.425 "delay_cmd_submit": true, 00:19:35.425 "transport_retry_count": 4, 00:19:35.425 "bdev_retry_count": 3, 00:19:35.425 "transport_ack_timeout": 0, 00:19:35.425 "ctrlr_loss_timeout_sec": 0, 00:19:35.425 "reconnect_delay_sec": 0, 00:19:35.425 "fast_io_fail_timeout_sec": 0, 00:19:35.425 "disable_auto_failback": false, 00:19:35.425 "generate_uuids": false, 00:19:35.425 "transport_tos": 0, 00:19:35.425 "nvme_error_stat": false, 00:19:35.425 "rdma_srq_size": 0, 00:19:35.425 "io_path_stat": false, 00:19:35.425 "allow_accel_sequence": false, 00:19:35.425 "rdma_max_cq_size": 0, 00:19:35.425 "rdma_cm_event_timeout_ms": 0, 00:19:35.425 "dhchap_digests": [ 00:19:35.425 "sha256", 00:19:35.425 "sha384", 00:19:35.425 "sha512" 00:19:35.425 ], 00:19:35.425 "dhchap_dhgroups": [ 00:19:35.425 "null", 00:19:35.425 "ffdhe2048", 00:19:35.425 "ffdhe3072", 00:19:35.425 "ffdhe4096", 00:19:35.425 "ffdhe6144", 00:19:35.425 "ffdhe8192" 00:19:35.425 ] 00:19:35.425 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_nvme_attach_controller", 00:19:35.425 "params": { 00:19:35.425 "name": "TLSTEST", 00:19:35.425 "trtype": "TCP", 00:19:35.425 "adrfam": "IPv4", 00:19:35.425 "traddr": "10.0.0.2", 00:19:35.425 "trsvcid": "4420", 00:19:35.425 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.425 "prchk_reftag": false, 00:19:35.425 "prchk_guard": false, 00:19:35.425 "ctrlr_loss_timeout_sec": 0, 00:19:35.425 "reconnect_delay_sec": 0, 00:19:35.425 "fast_io_fail_timeout_sec": 0, 00:19:35.425 "psk": "/tmp/tmp.t8XFWvtk1m", 00:19:35.425 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:35.425 "hdgst": false, 00:19:35.425 "ddgst": false 00:19:35.425 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_nvme_set_hotplug", 00:19:35.425 "params": { 00:19:35.425 "period_us": 100000, 00:19:35.425 "enable": false 00:19:35.425 } 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "method": "bdev_wait_for_examine" 00:19:35.425 } 00:19:35.425 ] 00:19:35.425 }, 00:19:35.425 { 00:19:35.425 "subsystem": "nbd", 00:19:35.425 "config": [] 00:19:35.425 } 00:19:35.425 ] 00:19:35.425 }' 00:19:35.425 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@199 -- # killprocess 359854 00:19:35.425 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 359854 ']' 00:19:35.425 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 359854 00:19:35.425 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:35.425 
12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.425 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 359854 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 359854' 00:19:35.702 killing process with pid 359854 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 359854 00:19:35.702 Received shutdown signal, test time was about 10.000000 seconds 00:19:35.702 00:19:35.702 Latency(us) 00:19:35.702 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:35.702 =================================================================================================================== 00:19:35.702 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:19:35.702 [2024-07-25 12:05:22.691238] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 359854 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@200 -- # killprocess 359584 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 359584 ']' 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 359584 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 359584 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 359584' 00:19:35.702 killing process with pid 359584 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 359584 00:19:35.702 [2024-07-25 12:05:22.917859] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:35.702 12:05:22 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 359584 00:19:35.962 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # nvmfappstart -m 0x2 -c /dev/fd/62 00:19:35.962 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:35.962 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:35.962 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@203 -- # echo '{ 00:19:35.962 "subsystems": [ 00:19:35.962 { 00:19:35.962 "subsystem": "keyring", 00:19:35.962 "config": [] 00:19:35.962 }, 00:19:35.962 { 00:19:35.962 "subsystem": "iobuf", 
00:19:35.962 "config": [ 00:19:35.962 { 00:19:35.962 "method": "iobuf_set_options", 00:19:35.962 "params": { 00:19:35.962 "small_pool_count": 8192, 00:19:35.962 "large_pool_count": 1024, 00:19:35.962 "small_bufsize": 8192, 00:19:35.962 "large_bufsize": 135168 00:19:35.962 } 00:19:35.962 } 00:19:35.962 ] 00:19:35.962 }, 00:19:35.962 { 00:19:35.963 "subsystem": "sock", 00:19:35.963 "config": [ 00:19:35.963 { 00:19:35.963 "method": "sock_set_default_impl", 00:19:35.963 "params": { 00:19:35.963 "impl_name": "posix" 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "sock_impl_set_options", 00:19:35.963 "params": { 00:19:35.963 "impl_name": "ssl", 00:19:35.963 "recv_buf_size": 4096, 00:19:35.963 "send_buf_size": 4096, 00:19:35.963 "enable_recv_pipe": true, 00:19:35.963 "enable_quickack": false, 00:19:35.963 "enable_placement_id": 0, 00:19:35.963 "enable_zerocopy_send_server": true, 00:19:35.963 "enable_zerocopy_send_client": false, 00:19:35.963 "zerocopy_threshold": 0, 00:19:35.963 "tls_version": 0, 00:19:35.963 "enable_ktls": false 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "sock_impl_set_options", 00:19:35.963 "params": { 00:19:35.963 "impl_name": "posix", 00:19:35.963 "recv_buf_size": 2097152, 00:19:35.963 "send_buf_size": 2097152, 00:19:35.963 "enable_recv_pipe": true, 00:19:35.963 "enable_quickack": false, 00:19:35.963 "enable_placement_id": 0, 00:19:35.963 "enable_zerocopy_send_server": true, 00:19:35.963 "enable_zerocopy_send_client": false, 00:19:35.963 "zerocopy_threshold": 0, 00:19:35.963 "tls_version": 0, 00:19:35.963 "enable_ktls": false 00:19:35.963 } 00:19:35.963 } 00:19:35.963 ] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "vmd", 00:19:35.963 "config": [] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "accel", 00:19:35.963 "config": [ 00:19:35.963 { 00:19:35.963 "method": "accel_set_options", 00:19:35.963 "params": { 00:19:35.963 "small_cache_size": 128, 00:19:35.963 "large_cache_size": 16, 00:19:35.963 "task_count": 2048, 00:19:35.963 "sequence_count": 2048, 00:19:35.963 "buf_count": 2048 00:19:35.963 } 00:19:35.963 } 00:19:35.963 ] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "bdev", 00:19:35.963 "config": [ 00:19:35.963 { 00:19:35.963 "method": "bdev_set_options", 00:19:35.963 "params": { 00:19:35.963 "bdev_io_pool_size": 65535, 00:19:35.963 "bdev_io_cache_size": 256, 00:19:35.963 "bdev_auto_examine": true, 00:19:35.963 "iobuf_small_cache_size": 128, 00:19:35.963 "iobuf_large_cache_size": 16 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_raid_set_options", 00:19:35.963 "params": { 00:19:35.963 "process_window_size_kb": 1024, 00:19:35.963 "process_max_bandwidth_mb_sec": 0 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_iscsi_set_options", 00:19:35.963 "params": { 00:19:35.963 "timeout_sec": 30 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_nvme_set_options", 00:19:35.963 "params": { 00:19:35.963 "action_on_timeout": "none", 00:19:35.963 "timeout_us": 0, 00:19:35.963 "timeout_admin_us": 0, 00:19:35.963 "keep_alive_timeout_ms": 10000, 00:19:35.963 "arbitration_burst": 0, 00:19:35.963 "low_priority_weight": 0, 00:19:35.963 "medium_priority_weight": 0, 00:19:35.963 "high_priority_weight": 0, 00:19:35.963 "nvme_adminq_poll_period_us": 10000, 00:19:35.963 "nvme_ioq_poll_period_us": 0, 00:19:35.963 "io_queue_requests": 0, 00:19:35.963 "delay_cmd_submit": true, 00:19:35.963 "transport_retry_count": 4, 00:19:35.963 
"bdev_retry_count": 3, 00:19:35.963 "transport_ack_timeout": 0, 00:19:35.963 "ctrlr_loss_timeout_sec": 0, 00:19:35.963 "reconnect_delay_sec": 0, 00:19:35.963 "fast_io_fail_timeout_sec": 0, 00:19:35.963 "disable_auto_failback": false, 00:19:35.963 "generate_uuids": false, 00:19:35.963 "transport_tos": 0, 00:19:35.963 "nvme_error_stat": false, 00:19:35.963 "rdma_srq_size": 0, 00:19:35.963 "io_path_stat": false, 00:19:35.963 "allow_accel_sequence": false, 00:19:35.963 "rdma_max_cq_size": 0, 00:19:35.963 "rdma_cm_event_timeout_ms": 0, 00:19:35.963 "dhchap_digests": [ 00:19:35.963 "sha256", 00:19:35.963 "sha384", 00:19:35.963 "sha512" 00:19:35.963 ], 00:19:35.963 "dhchap_dhgroups": [ 00:19:35.963 "null", 00:19:35.963 "ffdhe2048", 00:19:35.963 "ffdhe3072", 00:19:35.963 "ffdhe4096", 00:19:35.963 "ffdhe6144", 00:19:35.963 "ffdhe8192" 00:19:35.963 ] 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_nvme_set_hotplug", 00:19:35.963 "params": { 00:19:35.963 "period_us": 100000, 00:19:35.963 "enable": false 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_malloc_create", 00:19:35.963 "params": { 00:19:35.963 "name": "malloc0", 00:19:35.963 "num_blocks": 8192, 00:19:35.963 "block_size": 4096, 00:19:35.963 "physical_block_size": 4096, 00:19:35.963 "uuid": "b0b7d350-053b-41b5-a586-f6562726e1d6", 00:19:35.963 "optimal_io_boundary": 0, 00:19:35.963 "md_size": 0, 00:19:35.963 "dif_type": 0, 00:19:35.963 "dif_is_head_of_md": false, 00:19:35.963 "dif_pi_format": 0 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "bdev_wait_for_examine" 00:19:35.963 } 00:19:35.963 ] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "nbd", 00:19:35.963 "config": [] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "scheduler", 00:19:35.963 "config": [ 00:19:35.963 { 00:19:35.963 "method": "framework_set_scheduler", 00:19:35.963 "params": { 00:19:35.963 "name": "static" 00:19:35.963 } 00:19:35.963 } 00:19:35.963 ] 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "subsystem": "nvmf", 00:19:35.963 "config": [ 00:19:35.963 { 00:19:35.963 "method": "nvmf_set_config", 00:19:35.963 "params": { 00:19:35.963 "discovery_filter": "match_any", 00:19:35.963 "admin_cmd_passthru": { 00:19:35.963 "identify_ctrlr": false 00:19:35.963 } 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "nvmf_set_max_subsystems", 00:19:35.963 "params": { 00:19:35.963 "max_subsystems": 1024 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "nvmf_set_crdt", 00:19:35.963 "params": { 00:19:35.963 "crdt1": 0, 00:19:35.963 "crdt2": 0, 00:19:35.963 "crdt3": 0 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "nvmf_create_transport", 00:19:35.963 "params": { 00:19:35.963 "trtype": "TCP", 00:19:35.963 "max_queue_depth": 128, 00:19:35.963 "max_io_qpairs_per_ctrlr": 127, 00:19:35.963 "in_capsule_data_size": 4096, 00:19:35.963 "max_io_size": 131072, 00:19:35.963 "io_unit_size": 131072, 00:19:35.963 "max_aq_depth": 128, 00:19:35.963 "num_shared_buffers": 511, 00:19:35.963 "buf_cache_size": 4294967295, 00:19:35.963 "dif_insert_or_strip": false, 00:19:35.963 "zcopy": false, 00:19:35.963 "c2h_success": false, 00:19:35.963 "sock_priority": 0, 00:19:35.963 "abort_timeout_sec": 1, 00:19:35.963 "ack_timeout": 0, 00:19:35.963 "data_wr_pool_size": 0 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "nvmf_create_subsystem", 00:19:35.963 "params": { 00:19:35.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.963 "allow_any_host": 
false, 00:19:35.963 "serial_number": "SPDK00000000000001", 00:19:35.963 "model_number": "SPDK bdev Controller", 00:19:35.963 "max_namespaces": 10, 00:19:35.963 "min_cntlid": 1, 00:19:35.963 "max_cntlid": 65519, 00:19:35.963 "ana_reporting": false 00:19:35.963 } 00:19:35.963 }, 00:19:35.963 { 00:19:35.963 "method": "nvmf_subsystem_add_host", 00:19:35.963 "params": { 00:19:35.963 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.964 "host": "nqn.2016-06.io.spdk:host1", 00:19:35.964 "psk": "/tmp/tmp.t8XFWvtk1m" 00:19:35.964 } 00:19:35.964 }, 00:19:35.964 { 00:19:35.964 "method": "nvmf_subsystem_add_ns", 00:19:35.964 "params": { 00:19:35.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.964 "namespace": { 00:19:35.964 "nsid": 1, 00:19:35.964 "bdev_name": "malloc0", 00:19:35.964 "nguid": "B0B7D350053B41B5A586F6562726E1D6", 00:19:35.964 "uuid": "b0b7d350-053b-41b5-a586-f6562726e1d6", 00:19:35.964 "no_auto_visible": false 00:19:35.964 } 00:19:35.964 } 00:19:35.964 }, 00:19:35.964 { 00:19:35.964 "method": "nvmf_subsystem_add_listener", 00:19:35.964 "params": { 00:19:35.964 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:35.964 "listen_address": { 00:19:35.964 "trtype": "TCP", 00:19:35.964 "adrfam": "IPv4", 00:19:35.964 "traddr": "10.0.0.2", 00:19:35.964 "trsvcid": "4420" 00:19:35.964 }, 00:19:35.964 "secure_channel": true 00:19:35.964 } 00:19:35.964 } 00:19:35.964 ] 00:19:35.964 } 00:19:35.964 ] 00:19:35.964 }' 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=360313 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 360313 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 -c /dev/fd/62 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 360313 ']' 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:35.964 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:35.964 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:35.964 [2024-07-25 12:05:23.163856] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:35.964 [2024-07-25 12:05:23.163905] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:35.964 EAL: No free 2048 kB hugepages reported on node 1 00:19:36.224 [2024-07-25 12:05:23.221971] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:36.224 [2024-07-25 12:05:23.289076] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 
00:19:36.224 [2024-07-25 12:05:23.289117] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:36.224 [2024-07-25 12:05:23.289124] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:36.224 [2024-07-25 12:05:23.289130] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:36.224 [2024-07-25 12:05:23.289135] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:36.224 [2024-07-25 12:05:23.289207] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:36.483 [2024-07-25 12:05:23.491858] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:36.483 [2024-07-25 12:05:23.512554] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:36.483 [2024-07-25 12:05:23.528594] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:36.483 [2024-07-25 12:05:23.528764] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:36.741 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:36.742 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:36.742 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:36.742 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:36.742 12:05:23 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@207 -- # bdevperf_pid=360429 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@208 -- # waitforlisten 360429 /var/tmp/bdevperf.sock 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 360429 ']' 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 -c /dev/fd/63 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:37.006 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
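For reference, everything the first target instance does here comes from the JSON blob echoed onto /dev/fd/62 above: it creates the TCP transport, builds the subsystem nqn.2016-06.io.spdk:cnode1 on top of malloc0, registers nqn.2016-06.io.spdk:host1 with the PSK file /tmp/tmp.t8XFWvtk1m, and opens a TLS listener on 10.0.0.2:4420 with "secure_channel": true. A heavily abbreviated sketch of the same pattern (bash, with a placeholder PSK file; the bdev/malloc and tuning sections of the full dump are omitted, so this is illustrative only, not the exact config above):

  # Sketch only: start nvmf_tgt from an inline JSON config with a TLS listener
  # and a per-host PSK. $PSK_FILE stands in for the temporary key the test generates.
  PSK_FILE=/tmp/psk.txt
  CONFIG='{ "subsystems": [ { "subsystem": "nvmf", "config": [
    { "method": "nvmf_create_transport",       "params": { "trtype": "TCP" } },
    { "method": "nvmf_create_subsystem",       "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "allow_any_host": false } },
    { "method": "nvmf_subsystem_add_host",     "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "host": "nqn.2016-06.io.spdk:host1", "psk": "'"$PSK_FILE"'" } },
    { "method": "nvmf_subsystem_add_listener", "params": { "nqn": "nqn.2016-06.io.spdk:cnode1", "listen_address": { "trtype": "TCP", "adrfam": "IPv4", "traddr": "10.0.0.2", "trsvcid": "4420" }, "secure_channel": true } }
  ] } ] }'
  # Process substitution gives the config a /dev/fd path, which is how the test script
  # ends up passing -c /dev/fd/62 (the CI run additionally wraps this in
  # 'ip netns exec cvl_0_0_ns_spdk').
  ./build/bin/nvmf_tgt -m 0x2 -c <(echo "$CONFIG")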
00:19:37.006 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@204 -- # echo '{ 00:19:37.006 "subsystems": [ 00:19:37.006 { 00:19:37.006 "subsystem": "keyring", 00:19:37.006 "config": [] 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "subsystem": "iobuf", 00:19:37.006 "config": [ 00:19:37.006 { 00:19:37.006 "method": "iobuf_set_options", 00:19:37.006 "params": { 00:19:37.006 "small_pool_count": 8192, 00:19:37.006 "large_pool_count": 1024, 00:19:37.006 "small_bufsize": 8192, 00:19:37.006 "large_bufsize": 135168 00:19:37.006 } 00:19:37.006 } 00:19:37.006 ] 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "subsystem": "sock", 00:19:37.006 "config": [ 00:19:37.006 { 00:19:37.006 "method": "sock_set_default_impl", 00:19:37.006 "params": { 00:19:37.006 "impl_name": "posix" 00:19:37.006 } 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "method": "sock_impl_set_options", 00:19:37.006 "params": { 00:19:37.006 "impl_name": "ssl", 00:19:37.006 "recv_buf_size": 4096, 00:19:37.006 "send_buf_size": 4096, 00:19:37.006 "enable_recv_pipe": true, 00:19:37.006 "enable_quickack": false, 00:19:37.006 "enable_placement_id": 0, 00:19:37.006 "enable_zerocopy_send_server": true, 00:19:37.006 "enable_zerocopy_send_client": false, 00:19:37.006 "zerocopy_threshold": 0, 00:19:37.006 "tls_version": 0, 00:19:37.006 "enable_ktls": false 00:19:37.006 } 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "method": "sock_impl_set_options", 00:19:37.006 "params": { 00:19:37.006 "impl_name": "posix", 00:19:37.006 "recv_buf_size": 2097152, 00:19:37.006 "send_buf_size": 2097152, 00:19:37.006 "enable_recv_pipe": true, 00:19:37.006 "enable_quickack": false, 00:19:37.006 "enable_placement_id": 0, 00:19:37.006 "enable_zerocopy_send_server": true, 00:19:37.006 "enable_zerocopy_send_client": false, 00:19:37.006 "zerocopy_threshold": 0, 00:19:37.006 "tls_version": 0, 00:19:37.006 "enable_ktls": false 00:19:37.006 } 00:19:37.006 } 00:19:37.006 ] 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "subsystem": "vmd", 00:19:37.006 "config": [] 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "subsystem": "accel", 00:19:37.006 "config": [ 00:19:37.006 { 00:19:37.006 "method": "accel_set_options", 00:19:37.006 "params": { 00:19:37.006 "small_cache_size": 128, 00:19:37.006 "large_cache_size": 16, 00:19:37.006 "task_count": 2048, 00:19:37.006 "sequence_count": 2048, 00:19:37.006 "buf_count": 2048 00:19:37.006 } 00:19:37.006 } 00:19:37.006 ] 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "subsystem": "bdev", 00:19:37.006 "config": [ 00:19:37.006 { 00:19:37.006 "method": "bdev_set_options", 00:19:37.006 "params": { 00:19:37.006 "bdev_io_pool_size": 65535, 00:19:37.006 "bdev_io_cache_size": 256, 00:19:37.006 "bdev_auto_examine": true, 00:19:37.006 "iobuf_small_cache_size": 128, 00:19:37.006 "iobuf_large_cache_size": 16 00:19:37.006 } 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "method": "bdev_raid_set_options", 00:19:37.006 "params": { 00:19:37.006 "process_window_size_kb": 1024, 00:19:37.006 "process_max_bandwidth_mb_sec": 0 00:19:37.006 } 00:19:37.006 }, 00:19:37.006 { 00:19:37.006 "method": "bdev_iscsi_set_options", 00:19:37.006 "params": { 00:19:37.006 "timeout_sec": 30 00:19:37.006 } 00:19:37.006 }, 00:19:37.007 { 00:19:37.007 "method": "bdev_nvme_set_options", 00:19:37.007 "params": { 00:19:37.007 "action_on_timeout": "none", 00:19:37.007 "timeout_us": 0, 00:19:37.007 "timeout_admin_us": 0, 00:19:37.007 "keep_alive_timeout_ms": 10000, 00:19:37.007 "arbitration_burst": 0, 00:19:37.007 "low_priority_weight": 0, 00:19:37.007 "medium_priority_weight": 0, 
00:19:37.007 "high_priority_weight": 0, 00:19:37.007 "nvme_adminq_poll_period_us": 10000, 00:19:37.007 "nvme_ioq_poll_period_us": 0, 00:19:37.007 "io_queue_requests": 512, 00:19:37.007 "delay_cmd_submit": true, 00:19:37.007 "transport_retry_count": 4, 00:19:37.007 "bdev_retry_count": 3, 00:19:37.007 "transport_ack_timeout": 0, 00:19:37.007 "ctrlr_loss_timeout_sec": 0, 00:19:37.007 "reconnect_delay_sec": 0, 00:19:37.007 "fast_io_fail_timeout_sec": 0, 00:19:37.007 "disable_auto_failback": false, 00:19:37.007 "generate_uuids": false, 00:19:37.007 "transport_tos": 0, 00:19:37.007 "nvme_error_stat": false, 00:19:37.007 "rdma_srq_size": 0, 00:19:37.007 "io_path_stat": false, 00:19:37.007 "allow_accel_sequence": false, 00:19:37.007 "rdma_max_cq_size": 0, 00:19:37.007 "rdma_cm_event_timeout_ms": 0, 00:19:37.007 "dhchap_digests": [ 00:19:37.007 "sha256", 00:19:37.007 "sha384", 00:19:37.007 "sha512" 00:19:37.007 ], 00:19:37.007 "dhchap_dhgroups": [ 00:19:37.007 "null", 00:19:37.007 "ffdhe2048", 00:19:37.007 "ffdhe3072", 00:19:37.007 "ffdhe4096", 00:19:37.007 "ffdhe6144", 00:19:37.007 "ffdhe8192" 00:19:37.007 ] 00:19:37.007 } 00:19:37.007 }, 00:19:37.007 { 00:19:37.007 "method": "bdev_nvme_attach_controller", 00:19:37.007 "params": { 00:19:37.007 "name": "TLSTEST", 00:19:37.007 "trtype": "TCP", 00:19:37.007 "adrfam": "IPv4", 00:19:37.007 "traddr": "10.0.0.2", 00:19:37.007 "trsvcid": "4420", 00:19:37.007 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:37.007 "prchk_reftag": false, 00:19:37.007 "prchk_guard": false, 00:19:37.007 "ctrlr_loss_timeout_sec": 0, 00:19:37.007 "reconnect_delay_sec": 0, 00:19:37.007 "fast_io_fail_timeout_sec": 0, 00:19:37.007 "psk": "/tmp/tmp.t8XFWvtk1m", 00:19:37.007 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:37.007 "hdgst": false, 00:19:37.007 "ddgst": false 00:19:37.007 } 00:19:37.007 }, 00:19:37.007 { 00:19:37.007 "method": "bdev_nvme_set_hotplug", 00:19:37.007 "params": { 00:19:37.007 "period_us": 100000, 00:19:37.007 "enable": false 00:19:37.007 } 00:19:37.007 }, 00:19:37.007 { 00:19:37.007 "method": "bdev_wait_for_examine" 00:19:37.007 } 00:19:37.007 ] 00:19:37.007 }, 00:19:37.007 { 00:19:37.007 "subsystem": "nbd", 00:19:37.007 "config": [] 00:19:37.007 } 00:19:37.007 ] 00:19:37.007 }' 00:19:37.007 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:37.007 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:37.007 [2024-07-25 12:05:24.047797] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:37.007 [2024-07-25 12:05:24.047849] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid360429 ] 00:19:37.007 EAL: No free 2048 kB hugepages reported on node 1 00:19:37.007 [2024-07-25 12:05:24.100114] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:37.007 [2024-07-25 12:05:24.174368] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:19:37.335 [2024-07-25 12:05:24.316247] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:37.335 [2024-07-25 12:05:24.316325] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:19:37.904 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:37.904 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:37.904 12:05:24 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@211 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests 00:19:37.904 Running I/O for 10 seconds... 00:19:47.891 00:19:47.891 Latency(us) 00:19:47.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.891 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:19:47.891 Verification LBA range: start 0x0 length 0x2000 00:19:47.891 TLSTESTn1 : 10.09 1195.30 4.67 0.00 0.00 106694.77 5983.72 165036.74 00:19:47.891 =================================================================================================================== 00:19:47.891 Total : 1195.30 4.67 0.00 0.00 106694.77 5983.72 165036.74 00:19:47.891 0 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@213 -- # trap 'nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@214 -- # killprocess 360429 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 360429 ']' 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 360429 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 360429 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 360429' 00:19:47.891 killing process with pid 360429 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 360429 00:19:47.891 Received shutdown signal, test time was about 10.000000 seconds 00:19:47.891 00:19:47.891 Latency(us) 00:19:47.891 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:47.891 
=================================================================================================================== 00:19:47.891 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:47.891 [2024-07-25 12:05:35.120239] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:19:47.891 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 360429 00:19:48.150 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@215 -- # killprocess 360313 00:19:48.150 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 360313 ']' 00:19:48.150 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 360313 00:19:48.150 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:48.150 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 360313 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 360313' 00:19:48.151 killing process with pid 360313 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 360313 00:19:48.151 [2024-07-25 12:05:35.342265] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:48.151 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 360313 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@218 -- # nvmfappstart 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=362404 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 362404 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 362404 ']' 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:48.410 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:48.411 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:48.411 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
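On the initiator side of that first pass, bdevperf receives a parallel JSON config on /dev/fd/63 whose bdev_nvme_attach_controller call carries the PSK file path directly ("psk": "/tmp/tmp.t8XFWvtk1m"); the log above flags exactly that route as the deprecated spdk_nvme_ctrlr_opts.psk interface slated for removal in v24.09. Because bdevperf is launched with -z it only opens its RPC socket and waits, and the 10-second verify run (~1195 IOPS at 4 KiB here) is kicked off over that socket. A rough sketch of the sequence, assuming bash and a $BDEVPERF_CONFIG variable holding a config like the one echoed above (the exact fd plumbing the harness uses for /dev/fd/63 is not visible in this log; process substitution is one way to reproduce it):

  # Sketch only: start bdevperf idle (-z), then trigger the workload over its RPC socket.
  ./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock \
      -q 128 -o 4096 -w verify -t 10 -c <(echo "$BDEVPERF_CONFIG") &
  # ...wait for /var/tmp/bdevperf.sock to appear...
  ./examples/bdev/bdevperf/bdevperf.py -t 20 -s /var/tmp/bdevperf.sock perform_tests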
00:19:48.411 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:48.411 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:48.411 12:05:35 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:48.411 [2024-07-25 12:05:35.584574] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:48.411 [2024-07-25 12:05:35.584621] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:48.411 EAL: No free 2048 kB hugepages reported on node 1 00:19:48.411 [2024-07-25 12:05:35.642415] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:48.670 [2024-07-25 12:05:35.723726] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:48.670 [2024-07-25 12:05:35.723760] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:48.670 [2024-07-25 12:05:35.723767] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:48.670 [2024-07-25 12:05:35.723773] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:48.670 [2024-07-25 12:05:35.723779] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:48.670 [2024-07-25 12:05:35.723794] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@219 -- # setup_nvmf_tgt /tmp/tmp.t8XFWvtk1m 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@49 -- # local key=/tmp/tmp.t8XFWvtk1m 00:19:49.238 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@51 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:19:49.498 [2024-07-25 12:05:36.575122] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:49.498 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10 00:19:49.757 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k 00:19:49.757 [2024-07-25 12:05:36.924008] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 
00:19:49.757 [2024-07-25 12:05:36.924219] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:49.757 12:05:36 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0 00:19:50.016 malloc0 00:19:50.016 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.t8XFWvtk1m 00:19:50.275 [2024-07-25 12:05:37.445371] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@220 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@222 -- # bdevperf_pid=362668 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@224 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@225 -- # waitforlisten 362668 /var/tmp/bdevperf.sock 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 362668 ']' 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:50.275 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:50.275 12:05:37 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:50.275 [2024-07-25 12:05:37.506394] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:50.275 [2024-07-25 12:05:37.506445] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid362668 ] 00:19:50.534 EAL: No free 2048 kB hugepages reported on node 1 00:19:50.534 [2024-07-25 12:05:37.558993] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:50.534 [2024-07-25 12:05:37.633737] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:51.102 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:51.102 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:51.102 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@227 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.t8XFWvtk1m 00:19:51.362 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@228 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:51.621 [2024-07-25 12:05:38.629461] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:51.621 nvme0n1 00:19:51.621 12:05:38 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@232 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:51.621 Running I/O for 1 seconds... 00:19:53.001 00:19:53.001 Latency(us) 00:19:53.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.001 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:53.001 Verification LBA range: start 0x0 length 0x2000 00:19:53.001 nvme0n1 : 1.09 1067.50 4.17 0.00 0.00 116117.71 7066.49 150447.86 00:19:53.001 =================================================================================================================== 00:19:53.001 Total : 1067.50 4.17 0.00 0.00 116117.71 7066.49 150447.86 00:19:53.001 0 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@234 -- # killprocess 362668 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 362668 ']' 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 362668 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 362668 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 362668' 00:19:53.001 killing process with pid 362668 00:19:53.001 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 362668 00:19:53.001 Received shutdown signal, test 
time was about 1.000000 seconds 00:19:53.001 00:19:53.001 Latency(us) 00:19:53.001 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:53.002 =================================================================================================================== 00:19:53.002 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:53.002 12:05:39 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 362668 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@235 -- # killprocess 362404 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 362404 ']' 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 362404 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 362404 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 362404' 00:19:53.002 killing process with pid 362404 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 362404 00:19:53.002 [2024-07-25 12:05:40.190853] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:19:53.002 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 362404 00:19:53.261 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@240 -- # nvmfappstart 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=363146 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 363146 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 363146 ']' 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:53.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
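The second and third passes drop the monolithic startup JSON and build the same TLS state through individual RPC calls, which is also the state the save_config dumps further down capture: on the target, a TCP transport, a malloc0-backed subsystem, a listener created with -k (TLS), and a host entry carrying the PSK; on the bdevperf side, the newer keyring flow in which the PSK file is registered as a named key and the controller is attached by key name (--psk key0) rather than by file path. A condensed sketch of that RPC sequence, taken from the commands traced above (with $PSK_FILE standing in for this run's /tmp/tmp.t8XFWvtk1m and paths relative to an SPDK checkout):

  RPC=./scripts/rpc.py
  PSK_FILE=/tmp/psk.txt   # placeholder for the PSK file the test generates
  # Target side: transport, subsystem, TLS listener, backing bdev, namespace, per-host PSK.
  $RPC nvmf_create_transport -t tcp -o
  $RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -s SPDK00000000000001 -m 10
  $RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -k
  $RPC bdev_malloc_create 32 4096 -b malloc0
  $RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0 -n 1
  $RPC nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk $PSK_FILE
  # Initiator side (bdevperf RPC socket): register the key, then attach by key name.
  $RPC -s /var/tmp/bdevperf.sock keyring_file_add_key key0 $PSK_FILE
  $RPC -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1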
00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:53.262 12:05:40 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:53.262 [2024-07-25 12:05:40.433388] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:53.262 [2024-07-25 12:05:40.433434] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:53.262 EAL: No free 2048 kB hugepages reported on node 1 00:19:53.262 [2024-07-25 12:05:40.491868] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:53.521 [2024-07-25 12:05:40.571829] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:53.521 [2024-07-25 12:05:40.571865] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:53.521 [2024-07-25 12:05:40.571872] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:53.521 [2024-07-25 12:05:40.571877] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:53.521 [2024-07-25 12:05:40.571882] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:19:53.521 [2024-07-25 12:05:40.571903] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:54.090 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@241 -- # rpc_cmd 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.091 [2024-07-25 12:05:41.273510] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:54.091 malloc0 00:19:54.091 [2024-07-25 12:05:41.301799] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:54.091 [2024-07-25 12:05:41.309371] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@254 -- # bdevperf_pid=363388 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@256 -- # waitforlisten 363388 /var/tmp/bdevperf.sock 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 363388 ']' 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:54.091 
12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:54.091 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@252 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 00:19:54.091 12:05:41 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:54.351 [2024-07-25 12:05:41.377348] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:54.351 [2024-07-25 12:05:41.377399] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid363388 ] 00:19:54.351 EAL: No free 2048 kB hugepages reported on node 1 00:19:54.351 [2024-07-25 12:05:41.430353] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.351 [2024-07-25 12:05:41.503155] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:55.289 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:55.289 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:55.289 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@257 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock keyring_file_add_key key0 /tmp/tmp.t8XFWvtk1m 00:19:55.289 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@258 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 --psk key0 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 00:19:55.289 [2024-07-25 12:05:42.490974] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:55.549 nvme0n1 00:19:55.549 12:05:42 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@262 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:55.549 Running I/O for 1 seconds... 
00:19:56.932 00:19:56.932 Latency(us) 00:19:56.932 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:56.932 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:19:56.932 Verification LBA range: start 0x0 length 0x2000 00:19:56.932 nvme0n1 : 1.07 1050.12 4.10 0.00 0.00 119158.85 6069.20 153183.28 00:19:56.932 =================================================================================================================== 00:19:56.932 Total : 1050.12 4.10 0.00 0.00 119158.85 6069.20 153183.28 00:19:56.932 0 00:19:56.932 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # rpc_cmd save_config 00:19:56.932 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@559 -- # xtrace_disable 00:19:56.932 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:56.932 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:19:56.932 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@265 -- # tgtcfg='{ 00:19:56.932 "subsystems": [ 00:19:56.932 { 00:19:56.932 "subsystem": "keyring", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "keyring_file_add_key", 00:19:56.932 "params": { 00:19:56.932 "name": "key0", 00:19:56.932 "path": "/tmp/tmp.t8XFWvtk1m" 00:19:56.932 } 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "iobuf", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "iobuf_set_options", 00:19:56.932 "params": { 00:19:56.932 "small_pool_count": 8192, 00:19:56.932 "large_pool_count": 1024, 00:19:56.932 "small_bufsize": 8192, 00:19:56.932 "large_bufsize": 135168 00:19:56.932 } 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "sock", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "sock_set_default_impl", 00:19:56.932 "params": { 00:19:56.932 "impl_name": "posix" 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "sock_impl_set_options", 00:19:56.932 "params": { 00:19:56.932 "impl_name": "ssl", 00:19:56.932 "recv_buf_size": 4096, 00:19:56.932 "send_buf_size": 4096, 00:19:56.932 "enable_recv_pipe": true, 00:19:56.932 "enable_quickack": false, 00:19:56.932 "enable_placement_id": 0, 00:19:56.932 "enable_zerocopy_send_server": true, 00:19:56.932 "enable_zerocopy_send_client": false, 00:19:56.932 "zerocopy_threshold": 0, 00:19:56.932 "tls_version": 0, 00:19:56.932 "enable_ktls": false 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "sock_impl_set_options", 00:19:56.932 "params": { 00:19:56.932 "impl_name": "posix", 00:19:56.932 "recv_buf_size": 2097152, 00:19:56.932 "send_buf_size": 2097152, 00:19:56.932 "enable_recv_pipe": true, 00:19:56.932 "enable_quickack": false, 00:19:56.932 "enable_placement_id": 0, 00:19:56.932 "enable_zerocopy_send_server": true, 00:19:56.932 "enable_zerocopy_send_client": false, 00:19:56.932 "zerocopy_threshold": 0, 00:19:56.932 "tls_version": 0, 00:19:56.932 "enable_ktls": false 00:19:56.932 } 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "vmd", 00:19:56.932 "config": [] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "accel", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "accel_set_options", 00:19:56.932 "params": { 00:19:56.932 "small_cache_size": 128, 00:19:56.932 "large_cache_size": 16, 00:19:56.932 "task_count": 2048, 00:19:56.932 "sequence_count": 2048, 00:19:56.932 
"buf_count": 2048 00:19:56.932 } 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "bdev", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "bdev_set_options", 00:19:56.932 "params": { 00:19:56.932 "bdev_io_pool_size": 65535, 00:19:56.932 "bdev_io_cache_size": 256, 00:19:56.932 "bdev_auto_examine": true, 00:19:56.932 "iobuf_small_cache_size": 128, 00:19:56.932 "iobuf_large_cache_size": 16 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_raid_set_options", 00:19:56.932 "params": { 00:19:56.932 "process_window_size_kb": 1024, 00:19:56.932 "process_max_bandwidth_mb_sec": 0 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_iscsi_set_options", 00:19:56.932 "params": { 00:19:56.932 "timeout_sec": 30 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_nvme_set_options", 00:19:56.932 "params": { 00:19:56.932 "action_on_timeout": "none", 00:19:56.932 "timeout_us": 0, 00:19:56.932 "timeout_admin_us": 0, 00:19:56.932 "keep_alive_timeout_ms": 10000, 00:19:56.932 "arbitration_burst": 0, 00:19:56.932 "low_priority_weight": 0, 00:19:56.932 "medium_priority_weight": 0, 00:19:56.932 "high_priority_weight": 0, 00:19:56.932 "nvme_adminq_poll_period_us": 10000, 00:19:56.932 "nvme_ioq_poll_period_us": 0, 00:19:56.932 "io_queue_requests": 0, 00:19:56.932 "delay_cmd_submit": true, 00:19:56.932 "transport_retry_count": 4, 00:19:56.932 "bdev_retry_count": 3, 00:19:56.932 "transport_ack_timeout": 0, 00:19:56.932 "ctrlr_loss_timeout_sec": 0, 00:19:56.932 "reconnect_delay_sec": 0, 00:19:56.932 "fast_io_fail_timeout_sec": 0, 00:19:56.932 "disable_auto_failback": false, 00:19:56.932 "generate_uuids": false, 00:19:56.932 "transport_tos": 0, 00:19:56.932 "nvme_error_stat": false, 00:19:56.932 "rdma_srq_size": 0, 00:19:56.932 "io_path_stat": false, 00:19:56.932 "allow_accel_sequence": false, 00:19:56.932 "rdma_max_cq_size": 0, 00:19:56.932 "rdma_cm_event_timeout_ms": 0, 00:19:56.932 "dhchap_digests": [ 00:19:56.932 "sha256", 00:19:56.932 "sha384", 00:19:56.932 "sha512" 00:19:56.932 ], 00:19:56.932 "dhchap_dhgroups": [ 00:19:56.932 "null", 00:19:56.932 "ffdhe2048", 00:19:56.932 "ffdhe3072", 00:19:56.932 "ffdhe4096", 00:19:56.932 "ffdhe6144", 00:19:56.932 "ffdhe8192" 00:19:56.932 ] 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_nvme_set_hotplug", 00:19:56.932 "params": { 00:19:56.932 "period_us": 100000, 00:19:56.932 "enable": false 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_malloc_create", 00:19:56.932 "params": { 00:19:56.932 "name": "malloc0", 00:19:56.932 "num_blocks": 8192, 00:19:56.932 "block_size": 4096, 00:19:56.932 "physical_block_size": 4096, 00:19:56.932 "uuid": "9bf8a612-e0c2-4cef-9e66-59f41bbfd4b1", 00:19:56.932 "optimal_io_boundary": 0, 00:19:56.932 "md_size": 0, 00:19:56.932 "dif_type": 0, 00:19:56.932 "dif_is_head_of_md": false, 00:19:56.932 "dif_pi_format": 0 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "bdev_wait_for_examine" 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "nbd", 00:19:56.932 "config": [] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "scheduler", 00:19:56.932 "config": [ 00:19:56.932 { 00:19:56.932 "method": "framework_set_scheduler", 00:19:56.932 "params": { 00:19:56.932 "name": "static" 00:19:56.932 } 00:19:56.932 } 00:19:56.932 ] 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "subsystem": "nvmf", 00:19:56.932 "config": [ 00:19:56.932 { 
00:19:56.932 "method": "nvmf_set_config", 00:19:56.932 "params": { 00:19:56.932 "discovery_filter": "match_any", 00:19:56.932 "admin_cmd_passthru": { 00:19:56.932 "identify_ctrlr": false 00:19:56.932 } 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "nvmf_set_max_subsystems", 00:19:56.932 "params": { 00:19:56.932 "max_subsystems": 1024 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "nvmf_set_crdt", 00:19:56.932 "params": { 00:19:56.932 "crdt1": 0, 00:19:56.932 "crdt2": 0, 00:19:56.932 "crdt3": 0 00:19:56.932 } 00:19:56.932 }, 00:19:56.932 { 00:19:56.932 "method": "nvmf_create_transport", 00:19:56.932 "params": { 00:19:56.933 "trtype": "TCP", 00:19:56.933 "max_queue_depth": 128, 00:19:56.933 "max_io_qpairs_per_ctrlr": 127, 00:19:56.933 "in_capsule_data_size": 4096, 00:19:56.933 "max_io_size": 131072, 00:19:56.933 "io_unit_size": 131072, 00:19:56.933 "max_aq_depth": 128, 00:19:56.933 "num_shared_buffers": 511, 00:19:56.933 "buf_cache_size": 4294967295, 00:19:56.933 "dif_insert_or_strip": false, 00:19:56.933 "zcopy": false, 00:19:56.933 "c2h_success": false, 00:19:56.933 "sock_priority": 0, 00:19:56.933 "abort_timeout_sec": 1, 00:19:56.933 "ack_timeout": 0, 00:19:56.933 "data_wr_pool_size": 0 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "nvmf_create_subsystem", 00:19:56.933 "params": { 00:19:56.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.933 "allow_any_host": false, 00:19:56.933 "serial_number": "00000000000000000000", 00:19:56.933 "model_number": "SPDK bdev Controller", 00:19:56.933 "max_namespaces": 32, 00:19:56.933 "min_cntlid": 1, 00:19:56.933 "max_cntlid": 65519, 00:19:56.933 "ana_reporting": false 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "nvmf_subsystem_add_host", 00:19:56.933 "params": { 00:19:56.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.933 "host": "nqn.2016-06.io.spdk:host1", 00:19:56.933 "psk": "key0" 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "nvmf_subsystem_add_ns", 00:19:56.933 "params": { 00:19:56.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.933 "namespace": { 00:19:56.933 "nsid": 1, 00:19:56.933 "bdev_name": "malloc0", 00:19:56.933 "nguid": "9BF8A612E0C24CEF9E6659F41BBFD4B1", 00:19:56.933 "uuid": "9bf8a612-e0c2-4cef-9e66-59f41bbfd4b1", 00:19:56.933 "no_auto_visible": false 00:19:56.933 } 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "nvmf_subsystem_add_listener", 00:19:56.933 "params": { 00:19:56.933 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.933 "listen_address": { 00:19:56.933 "trtype": "TCP", 00:19:56.933 "adrfam": "IPv4", 00:19:56.933 "traddr": "10.0.0.2", 00:19:56.933 "trsvcid": "4420" 00:19:56.933 }, 00:19:56.933 "secure_channel": false, 00:19:56.933 "sock_impl": "ssl" 00:19:56.933 } 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 }' 00:19:56.933 12:05:43 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock save_config 00:19:56.933 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@266 -- # bperfcfg='{ 00:19:56.933 "subsystems": [ 00:19:56.933 { 00:19:56.933 "subsystem": "keyring", 00:19:56.933 "config": [ 00:19:56.933 { 00:19:56.933 "method": "keyring_file_add_key", 00:19:56.933 "params": { 00:19:56.933 "name": "key0", 00:19:56.933 "path": "/tmp/tmp.t8XFWvtk1m" 00:19:56.933 } 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "subsystem": "iobuf", 
00:19:56.933 "config": [ 00:19:56.933 { 00:19:56.933 "method": "iobuf_set_options", 00:19:56.933 "params": { 00:19:56.933 "small_pool_count": 8192, 00:19:56.933 "large_pool_count": 1024, 00:19:56.933 "small_bufsize": 8192, 00:19:56.933 "large_bufsize": 135168 00:19:56.933 } 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "subsystem": "sock", 00:19:56.933 "config": [ 00:19:56.933 { 00:19:56.933 "method": "sock_set_default_impl", 00:19:56.933 "params": { 00:19:56.933 "impl_name": "posix" 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "sock_impl_set_options", 00:19:56.933 "params": { 00:19:56.933 "impl_name": "ssl", 00:19:56.933 "recv_buf_size": 4096, 00:19:56.933 "send_buf_size": 4096, 00:19:56.933 "enable_recv_pipe": true, 00:19:56.933 "enable_quickack": false, 00:19:56.933 "enable_placement_id": 0, 00:19:56.933 "enable_zerocopy_send_server": true, 00:19:56.933 "enable_zerocopy_send_client": false, 00:19:56.933 "zerocopy_threshold": 0, 00:19:56.933 "tls_version": 0, 00:19:56.933 "enable_ktls": false 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "sock_impl_set_options", 00:19:56.933 "params": { 00:19:56.933 "impl_name": "posix", 00:19:56.933 "recv_buf_size": 2097152, 00:19:56.933 "send_buf_size": 2097152, 00:19:56.933 "enable_recv_pipe": true, 00:19:56.933 "enable_quickack": false, 00:19:56.933 "enable_placement_id": 0, 00:19:56.933 "enable_zerocopy_send_server": true, 00:19:56.933 "enable_zerocopy_send_client": false, 00:19:56.933 "zerocopy_threshold": 0, 00:19:56.933 "tls_version": 0, 00:19:56.933 "enable_ktls": false 00:19:56.933 } 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "subsystem": "vmd", 00:19:56.933 "config": [] 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "subsystem": "accel", 00:19:56.933 "config": [ 00:19:56.933 { 00:19:56.933 "method": "accel_set_options", 00:19:56.933 "params": { 00:19:56.933 "small_cache_size": 128, 00:19:56.933 "large_cache_size": 16, 00:19:56.933 "task_count": 2048, 00:19:56.933 "sequence_count": 2048, 00:19:56.933 "buf_count": 2048 00:19:56.933 } 00:19:56.933 } 00:19:56.933 ] 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "subsystem": "bdev", 00:19:56.933 "config": [ 00:19:56.933 { 00:19:56.933 "method": "bdev_set_options", 00:19:56.933 "params": { 00:19:56.933 "bdev_io_pool_size": 65535, 00:19:56.933 "bdev_io_cache_size": 256, 00:19:56.933 "bdev_auto_examine": true, 00:19:56.933 "iobuf_small_cache_size": 128, 00:19:56.933 "iobuf_large_cache_size": 16 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "bdev_raid_set_options", 00:19:56.933 "params": { 00:19:56.933 "process_window_size_kb": 1024, 00:19:56.933 "process_max_bandwidth_mb_sec": 0 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "bdev_iscsi_set_options", 00:19:56.933 "params": { 00:19:56.933 "timeout_sec": 30 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "bdev_nvme_set_options", 00:19:56.933 "params": { 00:19:56.933 "action_on_timeout": "none", 00:19:56.933 "timeout_us": 0, 00:19:56.933 "timeout_admin_us": 0, 00:19:56.933 "keep_alive_timeout_ms": 10000, 00:19:56.933 "arbitration_burst": 0, 00:19:56.933 "low_priority_weight": 0, 00:19:56.933 "medium_priority_weight": 0, 00:19:56.933 "high_priority_weight": 0, 00:19:56.933 "nvme_adminq_poll_period_us": 10000, 00:19:56.933 "nvme_ioq_poll_period_us": 0, 00:19:56.933 "io_queue_requests": 512, 00:19:56.933 "delay_cmd_submit": true, 00:19:56.933 "transport_retry_count": 4, 00:19:56.933 
"bdev_retry_count": 3, 00:19:56.933 "transport_ack_timeout": 0, 00:19:56.933 "ctrlr_loss_timeout_sec": 0, 00:19:56.933 "reconnect_delay_sec": 0, 00:19:56.933 "fast_io_fail_timeout_sec": 0, 00:19:56.933 "disable_auto_failback": false, 00:19:56.933 "generate_uuids": false, 00:19:56.933 "transport_tos": 0, 00:19:56.933 "nvme_error_stat": false, 00:19:56.933 "rdma_srq_size": 0, 00:19:56.933 "io_path_stat": false, 00:19:56.933 "allow_accel_sequence": false, 00:19:56.933 "rdma_max_cq_size": 0, 00:19:56.933 "rdma_cm_event_timeout_ms": 0, 00:19:56.933 "dhchap_digests": [ 00:19:56.933 "sha256", 00:19:56.933 "sha384", 00:19:56.933 "sha512" 00:19:56.933 ], 00:19:56.933 "dhchap_dhgroups": [ 00:19:56.933 "null", 00:19:56.933 "ffdhe2048", 00:19:56.933 "ffdhe3072", 00:19:56.933 "ffdhe4096", 00:19:56.933 "ffdhe6144", 00:19:56.933 "ffdhe8192" 00:19:56.933 ] 00:19:56.933 } 00:19:56.933 }, 00:19:56.933 { 00:19:56.933 "method": "bdev_nvme_attach_controller", 00:19:56.933 "params": { 00:19:56.933 "name": "nvme0", 00:19:56.933 "trtype": "TCP", 00:19:56.933 "adrfam": "IPv4", 00:19:56.933 "traddr": "10.0.0.2", 00:19:56.933 "trsvcid": "4420", 00:19:56.933 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:56.933 "prchk_reftag": false, 00:19:56.934 "prchk_guard": false, 00:19:56.934 "ctrlr_loss_timeout_sec": 0, 00:19:56.934 "reconnect_delay_sec": 0, 00:19:56.934 "fast_io_fail_timeout_sec": 0, 00:19:56.934 "psk": "key0", 00:19:56.934 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:56.934 "hdgst": false, 00:19:56.934 "ddgst": false 00:19:56.934 } 00:19:56.934 }, 00:19:56.934 { 00:19:56.934 "method": "bdev_nvme_set_hotplug", 00:19:56.934 "params": { 00:19:56.934 "period_us": 100000, 00:19:56.934 "enable": false 00:19:56.934 } 00:19:56.934 }, 00:19:56.934 { 00:19:56.934 "method": "bdev_enable_histogram", 00:19:56.934 "params": { 00:19:56.934 "name": "nvme0n1", 00:19:56.934 "enable": true 00:19:56.934 } 00:19:56.934 }, 00:19:56.934 { 00:19:56.934 "method": "bdev_wait_for_examine" 00:19:56.934 } 00:19:56.934 ] 00:19:56.934 }, 00:19:56.934 { 00:19:56.934 "subsystem": "nbd", 00:19:56.934 "config": [] 00:19:56.934 } 00:19:56.934 ] 00:19:56.934 }' 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@268 -- # killprocess 363388 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 363388 ']' 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 363388 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 363388 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 363388' 00:19:56.934 killing process with pid 363388 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 363388 00:19:56.934 Received shutdown signal, test time was about 1.000000 seconds 00:19:56.934 00:19:56.934 Latency(us) 00:19:56.934 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 
00:19:56.934 =================================================================================================================== 00:19:56.934 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:56.934 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 363388 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@269 -- # killprocess 363146 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 363146 ']' 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 363146 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 363146 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 363146' 00:19:57.194 killing process with pid 363146 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 363146 00:19:57.194 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 363146 00:19:57.454 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # nvmfappstart -c /dev/fd/62 00:19:57.454 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:19:57.454 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@722 -- # xtrace_disable 00:19:57.454 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@271 -- # echo '{ 00:19:57.454 "subsystems": [ 00:19:57.454 { 00:19:57.454 "subsystem": "keyring", 00:19:57.454 "config": [ 00:19:57.454 { 00:19:57.454 "method": "keyring_file_add_key", 00:19:57.454 "params": { 00:19:57.454 "name": "key0", 00:19:57.454 "path": "/tmp/tmp.t8XFWvtk1m" 00:19:57.454 } 00:19:57.454 } 00:19:57.454 ] 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "subsystem": "iobuf", 00:19:57.454 "config": [ 00:19:57.454 { 00:19:57.454 "method": "iobuf_set_options", 00:19:57.454 "params": { 00:19:57.454 "small_pool_count": 8192, 00:19:57.454 "large_pool_count": 1024, 00:19:57.454 "small_bufsize": 8192, 00:19:57.454 "large_bufsize": 135168 00:19:57.454 } 00:19:57.454 } 00:19:57.454 ] 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "subsystem": "sock", 00:19:57.454 "config": [ 00:19:57.454 { 00:19:57.454 "method": "sock_set_default_impl", 00:19:57.454 "params": { 00:19:57.454 "impl_name": "posix" 00:19:57.454 } 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "method": "sock_impl_set_options", 00:19:57.454 "params": { 00:19:57.454 "impl_name": "ssl", 00:19:57.454 "recv_buf_size": 4096, 00:19:57.454 "send_buf_size": 4096, 00:19:57.454 "enable_recv_pipe": true, 00:19:57.454 "enable_quickack": false, 00:19:57.454 "enable_placement_id": 0, 00:19:57.454 "enable_zerocopy_send_server": true, 00:19:57.454 "enable_zerocopy_send_client": false, 00:19:57.454 "zerocopy_threshold": 0, 00:19:57.454 "tls_version": 0, 00:19:57.454 "enable_ktls": false 00:19:57.454 } 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "method": 
"sock_impl_set_options", 00:19:57.454 "params": { 00:19:57.454 "impl_name": "posix", 00:19:57.454 "recv_buf_size": 2097152, 00:19:57.454 "send_buf_size": 2097152, 00:19:57.454 "enable_recv_pipe": true, 00:19:57.454 "enable_quickack": false, 00:19:57.454 "enable_placement_id": 0, 00:19:57.454 "enable_zerocopy_send_server": true, 00:19:57.454 "enable_zerocopy_send_client": false, 00:19:57.454 "zerocopy_threshold": 0, 00:19:57.454 "tls_version": 0, 00:19:57.454 "enable_ktls": false 00:19:57.454 } 00:19:57.454 } 00:19:57.454 ] 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "subsystem": "vmd", 00:19:57.454 "config": [] 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "subsystem": "accel", 00:19:57.454 "config": [ 00:19:57.454 { 00:19:57.454 "method": "accel_set_options", 00:19:57.454 "params": { 00:19:57.454 "small_cache_size": 128, 00:19:57.454 "large_cache_size": 16, 00:19:57.454 "task_count": 2048, 00:19:57.454 "sequence_count": 2048, 00:19:57.454 "buf_count": 2048 00:19:57.454 } 00:19:57.454 } 00:19:57.454 ] 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "subsystem": "bdev", 00:19:57.454 "config": [ 00:19:57.454 { 00:19:57.454 "method": "bdev_set_options", 00:19:57.454 "params": { 00:19:57.454 "bdev_io_pool_size": 65535, 00:19:57.454 "bdev_io_cache_size": 256, 00:19:57.454 "bdev_auto_examine": true, 00:19:57.454 "iobuf_small_cache_size": 128, 00:19:57.454 "iobuf_large_cache_size": 16 00:19:57.454 } 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "method": "bdev_raid_set_options", 00:19:57.454 "params": { 00:19:57.454 "process_window_size_kb": 1024, 00:19:57.454 "process_max_bandwidth_mb_sec": 0 00:19:57.454 } 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "method": "bdev_iscsi_set_options", 00:19:57.454 "params": { 00:19:57.454 "timeout_sec": 30 00:19:57.454 } 00:19:57.454 }, 00:19:57.454 { 00:19:57.454 "method": "bdev_nvme_set_options", 00:19:57.454 "params": { 00:19:57.454 "action_on_timeout": "none", 00:19:57.454 "timeout_us": 0, 00:19:57.454 "timeout_admin_us": 0, 00:19:57.454 "keep_alive_timeout_ms": 10000, 00:19:57.454 "arbitration_burst": 0, 00:19:57.454 "low_priority_weight": 0, 00:19:57.454 "medium_priority_weight": 0, 00:19:57.454 "high_priority_weight": 0, 00:19:57.455 "nvme_adminq_poll_period_us": 10000, 00:19:57.455 "nvme_ioq_poll_period_us": 0, 00:19:57.455 "io_queue_requests": 0, 00:19:57.455 "delay_cmd_submit": true, 00:19:57.455 "transport_retry_count": 4, 00:19:57.455 "bdev_retry_count": 3, 00:19:57.455 "transport_ack_timeout": 0, 00:19:57.455 "ctrlr_loss_timeout_sec": 0, 00:19:57.455 "reconnect_delay_sec": 0, 00:19:57.455 "fast_io_fail_timeout_sec": 0, 00:19:57.455 "disable_auto_failback": false, 00:19:57.455 "generate_uuids": false, 00:19:57.455 "transport_tos": 0, 00:19:57.455 "nvme_error_stat": false, 00:19:57.455 "rdma_srq_size": 0, 00:19:57.455 "io_path_stat": false, 00:19:57.455 "allow_accel_sequence": false, 00:19:57.455 "rdma_max_cq_size": 0, 00:19:57.455 "rdma_cm_event_timeout_ms": 0, 00:19:57.455 "dhchap_digests": [ 00:19:57.455 "sha256", 00:19:57.455 "sha384", 00:19:57.455 "sha512" 00:19:57.455 ], 00:19:57.455 "dhchap_dhgroups": [ 00:19:57.455 "null", 00:19:57.455 "ffdhe2048", 00:19:57.455 "ffdhe3072", 00:19:57.455 "ffdhe4096", 00:19:57.455 "ffdhe6144", 00:19:57.455 "ffdhe8192" 00:19:57.455 ] 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "bdev_nvme_set_hotplug", 00:19:57.455 "params": { 00:19:57.455 "period_us": 100000, 00:19:57.455 "enable": false 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "bdev_malloc_create", 00:19:57.455 
"params": { 00:19:57.455 "name": "malloc0", 00:19:57.455 "num_blocks": 8192, 00:19:57.455 "block_size": 4096, 00:19:57.455 "physical_block_size": 4096, 00:19:57.455 "uuid": "9bf8a612-e0c2-4cef-9e66-59f41bbfd4b1", 00:19:57.455 "optimal_io_boundary": 0, 00:19:57.455 "md_size": 0, 00:19:57.455 "dif_type": 0, 00:19:57.455 "dif_is_head_of_md": false, 00:19:57.455 "dif_pi_format": 0 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "bdev_wait_for_examine" 00:19:57.455 } 00:19:57.455 ] 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "subsystem": "nbd", 00:19:57.455 "config": [] 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "subsystem": "scheduler", 00:19:57.455 "config": [ 00:19:57.455 { 00:19:57.455 "method": "framework_set_scheduler", 00:19:57.455 "params": { 00:19:57.455 "name": "static" 00:19:57.455 } 00:19:57.455 } 00:19:57.455 ] 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "subsystem": "nvmf", 00:19:57.455 "config": [ 00:19:57.455 { 00:19:57.455 "method": "nvmf_set_config", 00:19:57.455 "params": { 00:19:57.455 "discovery_filter": "match_any", 00:19:57.455 "admin_cmd_passthru": { 00:19:57.455 "identify_ctrlr": false 00:19:57.455 } 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_set_max_subsystems", 00:19:57.455 "params": { 00:19:57.455 "max_subsystems": 1024 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_set_crdt", 00:19:57.455 "params": { 00:19:57.455 "crdt1": 0, 00:19:57.455 "crdt2": 0, 00:19:57.455 "crdt3": 0 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_create_transport", 00:19:57.455 "params": { 00:19:57.455 "trtype": "TCP", 00:19:57.455 "max_queue_depth": 128, 00:19:57.455 "max_io_qpairs_per_ctrlr": 127, 00:19:57.455 "in_capsule_data_size": 4096, 00:19:57.455 "max_io_size": 131072, 00:19:57.455 "io_unit_size": 131072, 00:19:57.455 "max_aq_depth": 128, 00:19:57.455 "num_shared_buffers": 511, 00:19:57.455 "buf_cache_size": 4294967295, 00:19:57.455 "dif_insert_or_strip": false, 00:19:57.455 "zcopy": false, 00:19:57.455 "c2h_success": false, 00:19:57.455 "sock_priority": 0, 00:19:57.455 "abort_timeout_sec": 1, 00:19:57.455 "ack_timeout": 0, 00:19:57.455 "data_wr_pool_size": 0 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_create_subsystem", 00:19:57.455 "params": { 00:19:57.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.455 "allow_any_host": false, 00:19:57.455 "serial_number": "00000000000000000000", 00:19:57.455 "model_number": "SPDK bdev Controller", 00:19:57.455 "max_namespaces": 32, 00:19:57.455 "min_cntlid": 1, 00:19:57.455 "max_cntlid": 65519, 00:19:57.455 "ana_reporting": false 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_subsystem_add_host", 00:19:57.455 "params": { 00:19:57.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.455 "host": "nqn.2016-06.io.spdk:host1", 00:19:57.455 "psk": "key0" 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_subsystem_add_ns", 00:19:57.455 "params": { 00:19:57.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.455 "namespace": { 00:19:57.455 "nsid": 1, 00:19:57.455 "bdev_name": "malloc0", 00:19:57.455 "nguid": "9BF8A612E0C24CEF9E6659F41BBFD4B1", 00:19:57.455 "uuid": "9bf8a612-e0c2-4cef-9e66-59f41bbfd4b1", 00:19:57.455 "no_auto_visible": false 00:19:57.455 } 00:19:57.455 } 00:19:57.455 }, 00:19:57.455 { 00:19:57.455 "method": "nvmf_subsystem_add_listener", 00:19:57.455 "params": { 00:19:57.455 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:19:57.455 "listen_address": { 
00:19:57.455 "trtype": "TCP", 00:19:57.455 "adrfam": "IPv4", 00:19:57.455 "traddr": "10.0.0.2", 00:19:57.455 "trsvcid": "4420" 00:19:57.455 }, 00:19:57.455 "secure_channel": false, 00:19:57.455 "sock_impl": "ssl" 00:19:57.455 } 00:19:57.455 } 00:19:57.455 ] 00:19:57.455 } 00:19:57.455 ] 00:19:57.455 }' 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@481 -- # nvmfpid=363870 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -c /dev/fd/62 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@482 -- # waitforlisten 363870 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 363870 ']' 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:57.455 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:57.455 12:05:44 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:57.455 [2024-07-25 12:05:44.645218] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:19:57.455 [2024-07-25 12:05:44.645266] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:19:57.455 EAL: No free 2048 kB hugepages reported on node 1 00:19:57.455 [2024-07-25 12:05:44.701209] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.715 [2024-07-25 12:05:44.779670] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:19:57.715 [2024-07-25 12:05:44.779709] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:19:57.715 [2024-07-25 12:05:44.779716] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:19:57.715 [2024-07-25 12:05:44.779722] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:19:57.715 [2024-07-25 12:05:44.779728] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:19:57.715 [2024-07-25 12:05:44.779776] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:19:57.974 [2024-07-25 12:05:44.990928] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:19:57.974 [2024-07-25 12:05:45.031224] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:19:57.974 [2024-07-25 12:05:45.031414] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:19:58.233 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:58.233 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:58.233 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:19:58.233 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@728 -- # xtrace_disable 00:19:58.233 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@274 -- # bdevperf_pid=364115 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@275 -- # waitforlisten 364115 /var/tmp/bdevperf.sock 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@829 -- # '[' -z 364115 ']' 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock -q 128 -o 4k -w verify -t 1 -c /dev/fd/63 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@834 -- # local max_retries=100 00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:19:58.494 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
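On the initiator side, bdevperf is started idle (-z) with its RPC socket enabled and its own JSON configuration supplied through a file descriptor rather than a file on disk; that configuration is the one echoed on the lines that follow. A hedged sketch of the same launch pattern, where BDEVPERF_CONFIG is an assumed shell variable holding that JSON:

# Launch bdevperf idle, RPC socket enabled, config fed via process substitution
# (equivalent to the '-c /dev/fd/63' form in the trace above).
build/examples/bdevperf -m 2 -z -r /var/tmp/bdevperf.sock \
    -q 128 -o 4k -w verify -t 1 -c <(echo "$BDEVPERF_CONFIG")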
00:19:58.494 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@272 -- # echo '{ 00:19:58.494 "subsystems": [ 00:19:58.494 { 00:19:58.494 "subsystem": "keyring", 00:19:58.494 "config": [ 00:19:58.494 { 00:19:58.494 "method": "keyring_file_add_key", 00:19:58.494 "params": { 00:19:58.494 "name": "key0", 00:19:58.494 "path": "/tmp/tmp.t8XFWvtk1m" 00:19:58.494 } 00:19:58.494 } 00:19:58.494 ] 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "subsystem": "iobuf", 00:19:58.494 "config": [ 00:19:58.494 { 00:19:58.494 "method": "iobuf_set_options", 00:19:58.494 "params": { 00:19:58.494 "small_pool_count": 8192, 00:19:58.494 "large_pool_count": 1024, 00:19:58.494 "small_bufsize": 8192, 00:19:58.494 "large_bufsize": 135168 00:19:58.494 } 00:19:58.494 } 00:19:58.494 ] 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "subsystem": "sock", 00:19:58.494 "config": [ 00:19:58.494 { 00:19:58.494 "method": "sock_set_default_impl", 00:19:58.494 "params": { 00:19:58.494 "impl_name": "posix" 00:19:58.494 } 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "method": "sock_impl_set_options", 00:19:58.494 "params": { 00:19:58.494 "impl_name": "ssl", 00:19:58.494 "recv_buf_size": 4096, 00:19:58.494 "send_buf_size": 4096, 00:19:58.494 "enable_recv_pipe": true, 00:19:58.494 "enable_quickack": false, 00:19:58.494 "enable_placement_id": 0, 00:19:58.494 "enable_zerocopy_send_server": true, 00:19:58.494 "enable_zerocopy_send_client": false, 00:19:58.494 "zerocopy_threshold": 0, 00:19:58.494 "tls_version": 0, 00:19:58.494 "enable_ktls": false 00:19:58.494 } 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "method": "sock_impl_set_options", 00:19:58.494 "params": { 00:19:58.494 "impl_name": "posix", 00:19:58.494 "recv_buf_size": 2097152, 00:19:58.494 "send_buf_size": 2097152, 00:19:58.494 "enable_recv_pipe": true, 00:19:58.494 "enable_quickack": false, 00:19:58.494 "enable_placement_id": 0, 00:19:58.494 "enable_zerocopy_send_server": true, 00:19:58.494 "enable_zerocopy_send_client": false, 00:19:58.494 "zerocopy_threshold": 0, 00:19:58.494 "tls_version": 0, 00:19:58.494 "enable_ktls": false 00:19:58.494 } 00:19:58.494 } 00:19:58.494 ] 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "subsystem": "vmd", 00:19:58.494 "config": [] 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "subsystem": "accel", 00:19:58.494 "config": [ 00:19:58.494 { 00:19:58.494 "method": "accel_set_options", 00:19:58.494 "params": { 00:19:58.494 "small_cache_size": 128, 00:19:58.494 "large_cache_size": 16, 00:19:58.494 "task_count": 2048, 00:19:58.494 "sequence_count": 2048, 00:19:58.494 "buf_count": 2048 00:19:58.494 } 00:19:58.494 } 00:19:58.494 ] 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "subsystem": "bdev", 00:19:58.494 "config": [ 00:19:58.494 { 00:19:58.494 "method": "bdev_set_options", 00:19:58.494 "params": { 00:19:58.494 "bdev_io_pool_size": 65535, 00:19:58.494 "bdev_io_cache_size": 256, 00:19:58.494 "bdev_auto_examine": true, 00:19:58.494 "iobuf_small_cache_size": 128, 00:19:58.494 "iobuf_large_cache_size": 16 00:19:58.494 } 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "method": "bdev_raid_set_options", 00:19:58.494 "params": { 00:19:58.494 "process_window_size_kb": 1024, 00:19:58.494 "process_max_bandwidth_mb_sec": 0 00:19:58.494 } 00:19:58.494 }, 00:19:58.494 { 00:19:58.494 "method": "bdev_iscsi_set_options", 00:19:58.495 "params": { 00:19:58.495 "timeout_sec": 30 00:19:58.495 } 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "method": "bdev_nvme_set_options", 00:19:58.495 "params": { 00:19:58.495 "action_on_timeout": "none", 00:19:58.495 "timeout_us": 0, 
00:19:58.495 "timeout_admin_us": 0, 00:19:58.495 "keep_alive_timeout_ms": 10000, 00:19:58.495 "arbitration_burst": 0, 00:19:58.495 "low_priority_weight": 0, 00:19:58.495 "medium_priority_weight": 0, 00:19:58.495 "high_priority_weight": 0, 00:19:58.495 "nvme_adminq_poll_period_us": 10000, 00:19:58.495 "nvme_ioq_poll_period_us": 0, 00:19:58.495 "io_queue_requests": 512, 00:19:58.495 "delay_cmd_submit": true, 00:19:58.495 "transport_retry_count": 4, 00:19:58.495 "bdev_retry_count": 3, 00:19:58.495 "transport_ack_timeout": 0, 00:19:58.495 "ctrlr_loss_timeout_sec": 0, 00:19:58.495 "reconnect_delay_sec": 0, 00:19:58.495 "fast_io_fail_timeout_sec": 0, 00:19:58.495 "disable_auto_failback": false, 00:19:58.495 "generate_uuids": false, 00:19:58.495 "transport_tos": 0, 00:19:58.495 "nvme_error_stat": false, 00:19:58.495 "rdma_srq_size": 0, 00:19:58.495 "io_path_stat": false, 00:19:58.495 "allow_accel_sequence": false, 00:19:58.495 "rdma_max_cq_size": 0, 00:19:58.495 "rdma_cm_event_timeout_ms": 0, 00:19:58.495 "dhchap_digests": [ 00:19:58.495 "sha256", 00:19:58.495 "sha384", 00:19:58.495 "sha512" 00:19:58.495 ], 00:19:58.495 "dhchap_dhgroups": [ 00:19:58.495 "null", 00:19:58.495 "ffdhe2048", 00:19:58.495 "ffdhe3072", 00:19:58.495 "ffdhe4096", 00:19:58.495 "ffdhe6144", 00:19:58.495 "ffdhe8192" 00:19:58.495 ] 00:19:58.495 } 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "method": "bdev_nvme_attach_controller", 00:19:58.495 "params": { 00:19:58.495 "name": "nvme0", 00:19:58.495 "trtype": "TCP", 00:19:58.495 "adrfam": "IPv4", 00:19:58.495 "traddr": "10.0.0.2", 00:19:58.495 "trsvcid": "4420", 00:19:58.495 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:19:58.495 "prchk_reftag": false, 00:19:58.495 "prchk_guard": false, 00:19:58.495 "ctrlr_loss_timeout_sec": 0, 00:19:58.495 "reconnect_delay_sec": 0, 00:19:58.495 "fast_io_fail_timeout_sec": 0, 00:19:58.495 "psk": "key0", 00:19:58.495 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:19:58.495 "hdgst": false, 00:19:58.495 "ddgst": false 00:19:58.495 } 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "method": "bdev_nvme_set_hotplug", 00:19:58.495 "params": { 00:19:58.495 "period_us": 100000, 00:19:58.495 "enable": false 00:19:58.495 } 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "method": "bdev_enable_histogram", 00:19:58.495 "params": { 00:19:58.495 "name": "nvme0n1", 00:19:58.495 "enable": true 00:19:58.495 } 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "method": "bdev_wait_for_examine" 00:19:58.495 } 00:19:58.495 ] 00:19:58.495 }, 00:19:58.495 { 00:19:58.495 "subsystem": "nbd", 00:19:58.495 "config": [] 00:19:58.495 } 00:19:58.495 ] 00:19:58.495 }' 00:19:58.495 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@838 -- # xtrace_disable 00:19:58.495 12:05:45 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:19:58.495 [2024-07-25 12:05:45.534305] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:19:58.495 [2024-07-25 12:05:45.534352] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid364115 ] 00:19:58.495 EAL: No free 2048 kB hugepages reported on node 1 00:19:58.495 [2024-07-25 12:05:45.587075] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:58.495 [2024-07-25 12:05:45.661884] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:19:58.755 [2024-07-25 12:05:45.813264] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@862 -- # return 0 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # jq -r '.[].name' 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@277 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:19:59.355 12:05:46 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@278 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:19:59.355 Running I/O for 1 seconds... 00:20:00.734 00:20:00.735 Latency(us) 00:20:00.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.735 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:20:00.735 Verification LBA range: start 0x0 length 0x2000 00:20:00.735 nvme0n1 : 1.08 1023.19 4.00 0.00 0.00 121565.69 7180.47 159565.91 00:20:00.735 =================================================================================================================== 00:20:00.735 Total : 1023.19 4.00 0.00 0.00 121565.69 7180.47 159565.91 00:20:00.735 0 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@280 -- # trap - SIGINT SIGTERM EXIT 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@281 -- # cleanup 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@15 -- # process_shm --id 0 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@806 -- # type=--id 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@807 -- # id=0 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@808 -- # '[' --id = --pid ']' 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n' 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]] 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@818 -- # for n in $shm_files 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0 00:20:00.735 nvmf_trace.0 00:20:00.735 12:05:47 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@821 -- # return 0 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@16 -- # killprocess 364115 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 364115 ']' 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 364115 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 364115 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 364115' 00:20:00.735 killing process with pid 364115 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 364115 00:20:00.735 Received shutdown signal, test time was about 1.000000 seconds 00:20:00.735 00:20:00.735 Latency(us) 00:20:00.735 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:20:00.735 =================================================================================================================== 00:20:00.735 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:20:00.735 12:05:47 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 364115 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@17 -- # nvmftestfini 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@117 -- # sync 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@120 -- # set +e 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:00.994 rmmod nvme_tcp 00:20:00.994 rmmod nvme_fabrics 00:20:00.994 rmmod nvme_keyring 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@124 -- # set -e 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@125 -- # return 0 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@489 -- # '[' -n 363870 ']' 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@490 -- # killprocess 363870 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@948 -- # '[' -z 363870 ']' 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@952 -- # kill -0 363870 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # uname 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:00.994 12:05:48 
nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 363870 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:00.994 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@966 -- # echo 'killing process with pid 363870' 00:20:00.995 killing process with pid 363870 00:20:00.995 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@967 -- # kill 363870 00:20:00.995 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@972 -- # wait 363870 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:01.254 12:05:48 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- target/tls.sh@18 -- # rm -f /tmp/tmp.FZbANw7H6Y /tmp/tmp.pZ5hTSuDrV /tmp/tmp.t8XFWvtk1m 00:20:03.182 00:20:03.182 real 1m24.958s 00:20:03.182 user 2m14.208s 00:20:03.182 sys 0m26.247s 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_tls -- common/autotest_common.sh@10 -- # set +x 00:20:03.182 ************************************ 00:20:03.182 END TEST nvmf_tls 00:20:03.182 ************************************ 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@42 -- # run_test nvmf_fips /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:03.182 ************************************ 00:20:03.182 START TEST nvmf_fips 00:20:03.182 ************************************ 00:20:03.182 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/fips.sh --transport=tcp 00:20:03.442 * Looking for test storage... 
00:20:03.442 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # uname -s 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:03.442 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@5 -- # export PATH 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@47 -- # : 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@89 -- # check_openssl_version 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@83 -- # local target=3.0.0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # openssl version 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- 
# awk '{print $2}' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@85 -- # ge 3.0.9 3.0.0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@373 -- # cmp_versions 3.0.9 '>=' 3.0.0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@330 -- # local ver1 ver1_l 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@331 -- # local ver2 ver2_l 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # IFS=.-: 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@333 -- # read -ra ver1 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # IFS=.-: 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@334 -- # read -ra ver2 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@335 -- # local 'op=>=' 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@337 -- # ver1_l=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@338 -- # ver2_l=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@340 -- # local lt=0 gt=0 eq=0 v 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@341 -- # case "$op" in 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@345 -- # : 1 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v = 0 )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 3 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=3 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@365 -- # (( ver1[v] < ver2[v] )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v++ )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@361 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # decimal 9 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=9 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 9 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 9 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@362 -- # ver1[v]=9 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # decimal 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@350 -- # local d=0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@351 -- # [[ 0 =~ ^[0-9]+$ ]] 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@352 -- # echo 0 00:20:03.443 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@363 -- # ver2[v]=0 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # (( ver1[v] > ver2[v] )) 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- scripts/common.sh@364 -- # return 0 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # openssl info -modulesdir 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@95 -- # [[ ! 
-f /usr/lib64/ossl-modules/fips.so ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # openssl fipsinstall -help 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@100 -- # warn='This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode' 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@101 -- # [[ This command is not enabled in the Red Hat Enterprise Linux OpenSSL build, please consult Red Hat documentation to learn how to enable FIPS mode == \T\h\i\s\ \c\o\m\m\a\n\d\ \i\s\ \n\o\t\ \e\n\a\b\l\e\d* ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # export callback=build_openssl_config 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@104 -- # callback=build_openssl_config 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@113 -- # build_openssl_config 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@37 -- # cat 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@57 -- # [[ ! -t 0 ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@58 -- # cat - 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # export OPENSSL_CONF=spdk_fips.conf 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@114 -- # OPENSSL_CONF=spdk_fips.conf 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # mapfile -t providers 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # openssl list -providers 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@116 -- # grep name 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # (( 2 != 2 )) 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: openssl base provider != *base* ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@120 -- # [[ name: red hat enterprise linux 9 - openssl fips provider != *fips* ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # NOT openssl md5 /dev/fd/62 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@648 -- # local es=0 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@127 -- # : 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@650 -- # valid_exec_arg openssl md5 /dev/fd/62 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@636 -- # local arg=openssl 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # type -t openssl 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # type -P openssl 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@642 -- # arg=/usr/bin/openssl 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- 
common/autotest_common.sh@642 -- # [[ -x /usr/bin/openssl ]] 00:20:03.444 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # openssl md5 /dev/fd/62 00:20:03.444 Error setting digest 00:20:03.444 0052F616057F0000:error:0308010C:digital envelope routines:inner_evp_generic_fetch:unsupported:crypto/evp/evp_fetch.c:373:Global default library context, Algorithm (MD5 : 97), Properties () 00:20:03.444 0052F616057F0000:error:03000086:digital envelope routines:evp_md_init_internal:initialization error:crypto/evp/digest.c:254: 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@651 -- # es=1 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@130 -- # nvmftestinit 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@285 -- # xtrace_disable 00:20:03.703 12:05:50 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # pci_devs=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # net_devs=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # e810=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@296 -- # 
local -ga e810 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # x722=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@297 -- # local -ga x722 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # mlx=() 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@298 -- # local -ga mlx 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:08.981 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.981 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 
00:20:08.982 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:08.982 Found net devices under 0000:86:00.0: cvl_0_0 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:08.982 Found net devices under 0000:86:00.1: cvl_0_1 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@414 -- # is_hw=yes 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:08.982 
12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:08.982 12:05:55 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:08.982 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:08.982 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.183 ms 00:20:08.982 00:20:08.982 --- 10.0.0.2 ping statistics --- 00:20:08.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.982 rtt min/avg/max/mdev = 0.183/0.183/0.183/0.000 ms 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:08.982 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:20:08.982 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.414 ms 00:20:08.982 00:20:08.982 --- 10.0.0.1 ping statistics --- 00:20:08.982 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:08.982 rtt min/avg/max/mdev = 0.414/0.414/0.414/0.000 ms 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@422 -- # return 0 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@131 -- # nvmfappstart -m 0x2 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@481 -- # nvmfpid=367912 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@482 -- # waitforlisten 367912 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 367912 ']' 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:08.982 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:20:08.982 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:08.982 [2024-07-25 12:05:56.166840] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
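For reference, the nvmf_tcp_init sequence traced above splits the two E810 ports into a small back-to-back topology: cvl_0_0 is moved into a private network namespace and becomes the target side (10.0.0.2), while cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1). A minimal sketch of the equivalent commands, using the interface names and addresses from this run:

# Flush any stale addresses, then split the two ports across namespaces
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add cvl_0_0_ns_spdk                                        # namespace for the target side
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                           # target port moves into it
ip addr add 10.0.0.1/24 dev cvl_0_1                                 # initiator address (root namespace)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address (namespace)
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT        # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                  # root namespace -> target namespace
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                    # target namespace -> root namespace

With connectivity confirmed, modprobe nvme-tcp makes the kernel NVMe/TCP modules available and nvmf_tgt is started inside the target namespace, which produces the "Starting SPDK" banner and the DPDK EAL lines that follow.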
00:20:08.982 [2024-07-25 12:05:56.166888] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:08.982 EAL: No free 2048 kB hugepages reported on node 1 00:20:08.982 [2024-07-25 12:05:56.226459] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:09.241 [2024-07-25 12:05:56.306915] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:09.241 [2024-07-25 12:05:56.306950] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:09.241 [2024-07-25 12:05:56.306957] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:09.242 [2024-07-25 12:05:56.306963] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:09.242 [2024-07-25 12:05:56.306968] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:09.242 [2024-07-25 12:05:56.306984] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@133 -- # trap cleanup EXIT 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@136 -- # key=NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@137 -- # key_path=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@138 -- # echo -n NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ: 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@139 -- # chmod 0600 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@141 -- # setup_nvmf_tgt_conf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@22 -- # local key=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:09.810 12:05:56 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:20:10.068 [2024-07-25 12:05:57.134390] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:10.068 [2024-07-25 12:05:57.150401] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:20:10.068 [2024-07-25 12:05:57.150540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:10.068 
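The key string written to key.txt above is a TLS pre-shared key in the NVMe-oF interchange format, and setup_nvmf_tgt_conf feeds it to the target through rpc.py. The individual RPC calls are collapsed in this trace, so the following is only a sketch of their likely shape for this run: the cnode1/host1 NQNs and the malloc0 bdev are the names used later, while the bdev sizes and the exact option spellings (in particular --psk on nvmf_subsystem_add_host) are assumptions consistent with the PSK-path deprecation warnings printed by the target.

# Store the PSK with restrictive permissions
echo -n 'NVMeTLSkey-1:01:VRLbtnN9AQb2WXW3c9+wEf/DRLz0QuLdbYvEhwtdWwNf9LrZ:' > key.txt
chmod 0600 key.txt

# Rough shape of the target-side configuration issued through scripts/rpc.py
./scripts/rpc.py nvmf_create_transport -t tcp                                   # "TCP Transport Init" notice above
./scripts/rpc.py bdev_malloc_create 32 4096 -b malloc0                          # sizes illustrative
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 malloc0
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
./scripts/rpc.py nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode1 nqn.2016-06.io.spdk:host1 --psk key.txt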
[2024-07-25 12:05:57.178394] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:20:10.068 malloc0 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@144 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@147 -- # bdevperf_pid=368162 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@148 -- # waitforlisten 368162 /var/tmp/bdevperf.sock 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@145 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@829 -- # '[' -z 368162 ']' 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:20:10.068 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:10.068 12:05:57 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:10.068 [2024-07-25 12:05:57.261306] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:20:10.068 [2024-07-25 12:05:57.261365] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid368162 ] 00:20:10.068 EAL: No free 2048 kB hugepages reported on node 1 00:20:10.068 [2024-07-25 12:05:57.312749] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:10.327 [2024-07-25 12:05:57.388823] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:10.895 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:10.896 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@862 -- # return 0 00:20:10.896 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@150 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 --psk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:11.154 [2024-07-25 12:05:58.194312] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:20:11.154 [2024-07-25 12:05:58.194389] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:20:11.154 TLSTESTn1 00:20:11.154 12:05:58 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@154 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:20:11.414 Running I/O for 10 seconds... 
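On the initiator side everything goes through bdevperf's private RPC socket: the application is started idle (-z) on core mask 0x4, a TLS-protected NVMe/TCP controller is attached with the same PSK file, and perform_tests launches the queued 10-second verify workload whose results are printed next. The same steps, with repo-relative paths:

# Start bdevperf idle with its own RPC socket: queue depth 128, 4 KiB I/O, verify workload, 10 s
./build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 10 &

# Attach the TLS-protected controller (creates bdev TLSTESTn1 on success)
./scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b TLSTEST -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -q nqn.2016-06.io.spdk:host1 \
        --psk ./test/nvmf/fips/key.txt

# Run the configured workload; the latency table below is its output
./examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests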
00:20:21.397 
00:20:21.397 Latency(us)
00:20:21.397 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.397 Job: TLSTESTn1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096)
00:20:21.397 Verification LBA range: start 0x0 length 0x2000
00:20:21.397 TLSTESTn1 : 10.08 1124.16 4.39 0.00 0.00 113484.68 6154.69 173242.99
00:20:21.397 ===================================================================================================================
00:20:21.397 Total : 1124.16 4.39 0.00 0.00 113484.68 6154.69 173242.99
00:20:21.397 0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@1 -- # cleanup
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@15 -- # process_shm --id 0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@806 -- # type=--id
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@807 -- # id=0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@808 -- # '[' --id = --pid ']'
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # find /dev/shm -name '*.0' -printf '%f\n'
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@812 -- # shm_files=nvmf_trace.0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@814 -- # [[ -z nvmf_trace.0 ]]
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@818 -- # for n in $shm_files
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@819 -- # tar -C /dev/shm/ -cvzf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvmf_trace.0_shm.tar.gz nvmf_trace.0
00:20:21.397 nvmf_trace.0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@821 -- # return 0
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@16 -- # killprocess 368162
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 368162 ']'
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 368162
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:20:21.397 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 368162
00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_2
00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']'
00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 368162'
00:20:21.657 killing process with pid 368162
00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 368162
00:20:21.657 Received shutdown signal, test time was about 10.000000 seconds
00:20:21.657 
00:20:21.657 Latency(us)
00:20:21.657 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:20:21.657 ===================================================================================================================
00:20:21.657 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:20:21.657 [2024-07-25
12:06:08.658593] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 368162 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@17 -- # nvmftestfini 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@117 -- # sync 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@120 -- # set +e 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:21.657 rmmod nvme_tcp 00:20:21.657 rmmod nvme_fabrics 00:20:21.657 rmmod nvme_keyring 00:20:21.657 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@124 -- # set -e 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@125 -- # return 0 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@489 -- # '[' -n 367912 ']' 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@490 -- # killprocess 367912 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@948 -- # '[' -z 367912 ']' 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@952 -- # kill -0 367912 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # uname 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 367912 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@966 -- # echo 'killing process with pid 367912' 00:20:21.918 killing process with pid 367912 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@967 -- # kill 367912 00:20:21.918 [2024-07-25 12:06:08.958190] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:20:21.918 12:06:08 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@972 -- # wait 367912 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:21.918 12:06:09 
nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:21.918 12:06:09 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:24.458 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:24.458 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- fips/fips.sh@18 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/fips/key.txt 00:20:24.458 00:20:24.458 real 0m20.802s 00:20:24.458 user 0m23.610s 00:20:24.458 sys 0m8.059s 00:20:24.458 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@1124 -- # xtrace_disable 00:20:24.458 12:06:11 nvmf_tcp.nvmf_target_extra.nvmf_fips -- common/autotest_common.sh@10 -- # set +x 00:20:24.458 ************************************ 00:20:24.458 END TEST nvmf_fips 00:20:24.458 ************************************ 00:20:24.458 12:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@45 -- # '[' 0 -eq 1 ']' 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@51 -- # [[ phy == phy ]] 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@52 -- # '[' tcp = tcp ']' 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@53 -- # gather_supported_nvmf_pci_devs 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@285 -- # xtrace_disable 00:20:24.459 12:06:11 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # pci_devs=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # net_devs=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # e810=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@296 -- # local -ga e810 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # x722=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@297 -- # local -ga x722 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # mlx=() 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@298 -- # local -ga mlx 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:28.681 12:06:15 
nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:28.681 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:28.681 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:28.681 Found net devices under 0000:86:00.0: cvl_0_0 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:28.681 Found net devices under 0000:86:00.1: cvl_0_1 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@54 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@55 -- # (( 2 > 0 )) 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@56 -- # run_test nvmf_perf_adq /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:20:28.681 12:06:15 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:20:28.681 ************************************ 00:20:28.681 START TEST nvmf_perf_adq 00:20:28.681 ************************************ 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/perf_adq.sh --transport=tcp 00:20:28.682 * Looking for test storage... 
00:20:28.682 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # uname -s 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.682 12:06:15 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@5 -- # export PATH 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@47 -- # : 0 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@51 -- # have_pci_nics=0 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@11 -- # gather_supported_nvmf_pci_devs 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:28.682 12:06:15 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # 
local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:33.960 12:06:20 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:33.960 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:33.960 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:33.960 Found net devices under 0000:86:00.0: cvl_0_0 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 
00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:33.960 Found net devices under 0000:86:00.1: cvl_0_1 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@12 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@13 -- # (( 2 == 0 )) 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@18 -- # perf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@60 -- # adq_reload_driver 00:20:33.960 12:06:20 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:34.900 12:06:22 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:36.832 12:06:23 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@68 -- # nvmftestinit 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 
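Before the ADQ run, adq_reload_driver (perf_adq.sh lines 53-55 in the trace) simply cycles the E810 driver so the ports come back with a clean queue configuration, then waits for the netdevs to reappear before nvmftestinit repeats the same device discovery and namespace setup shown earlier:

rmmod ice        # unload the E810 driver, dropping cvl_0_0/cvl_0_1
modprobe ice     # reload it so the ports return with default channel settings
sleep 5          # give the interfaces time to come back before re-probing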
00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:20:42.112 Found 0000:86:00.0 (0x8086 - 0x159b) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:20:42.112 Found 0000:86:00.1 (0x8086 - 0x159b) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:20:42.112 Found net devices under 0000:86:00.0: cvl_0_0 00:20:42.112 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:20:42.113 12:06:28 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:20:42.113 Found net devices under 0000:86:00.1: cvl_0_1 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:20:42.113 12:06:28 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 
00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:20:42.113 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:20:42.113 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.198 ms 00:20:42.113 00:20:42.113 --- 10.0.0.2 ping statistics --- 00:20:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.113 rtt min/avg/max/mdev = 0.198/0.198/0.198/0.000 ms 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:20:42.113 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:20:42.113 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.443 ms 00:20:42.113 00:20:42.113 --- 10.0.0.1 ping statistics --- 00:20:42.113 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:20:42.113 rtt min/avg/max/mdev = 0.443/0.443/0.443/0.000 ms 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@69 -- # nvmfappstart -m 0xF --wait-for-rpc 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=378373 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 378373 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 378373 ']' 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket 
/var/tmp/spdk.sock...' 00:20:42.113 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:20:42.113 12:06:29 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:42.113 [2024-07-25 12:06:29.189118] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:20:42.113 [2024-07-25 12:06:29.189163] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:20:42.113 EAL: No free 2048 kB hugepages reported on node 1 00:20:42.113 [2024-07-25 12:06:29.249620] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:20:42.113 [2024-07-25 12:06:29.331394] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:20:42.113 [2024-07-25 12:06:29.331430] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:20:42.113 [2024-07-25 12:06:29.331437] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:20:42.113 [2024-07-25 12:06:29.331443] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:20:42.113 [2024-07-25 12:06:29.331448] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:20:42.113 [2024-07-25 12:06:29.331494] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:20:42.113 [2024-07-25 12:06:29.331513] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:20:42.113 [2024-07-25 12:06:29.331600] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:20:42.113 [2024-07-25 12:06:29.331601] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@70 -- # adq_configure_nvmf_target 0 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 
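With the default socket implementation resolved to posix, adq_configure_nvmf_target (called here with argument 0, meaning placement-based ADQ steering is left off for the baseline pass) issues the RPC sequence traced below over the /var/tmp/spdk.sock socket. Condensed into explicit rpc.py calls (the rpc_cmd helper in the log is assumed to wrap scripts/rpc.py against the target that was started with --wait-for-rpc), the configuration amounts to roughly:

  # socket options: placement id 0 for the baseline run, zero-copy send enabled
  scripts/rpc.py sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix
  # finish framework init (deferred by --wait-for-rpc), then create the TCP transport
  scripts/rpc.py framework_start_init
  scripts/rpc.py nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0
  # back the subsystem with a 64 MiB malloc bdev (512-byte blocks) and expose it on 10.0.0.2:4420
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

The second pass of this test (adq_configure_nvmf_target 1, further down in the log) repeats the same sequence with --enable-placement-id 1 and --sock-priority 1, which is what actually enables ADQ-aware queue placement on the target side.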
00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 0 --enable-zerocopy-send-server -i posix 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 0 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 [2024-07-25 12:06:30.202967] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 Malloc1 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.052 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:43.053 [2024-07-25 12:06:30.254767] 
tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@74 -- # perfpid=378522 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@75 -- # sleep 2 00:20:43.053 12:06:30 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@71 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:20:43.053 EAL: No free 2048 kB hugepages reported on node 1 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # rpc_cmd nvmf_get_stats 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@77 -- # nvmf_stats='{ 00:20:45.589 "tick_rate": 2300000000, 00:20:45.589 "poll_groups": [ 00:20:45.589 { 00:20:45.589 "name": "nvmf_tgt_poll_group_000", 00:20:45.589 "admin_qpairs": 1, 00:20:45.589 "io_qpairs": 1, 00:20:45.589 "current_admin_qpairs": 1, 00:20:45.589 "current_io_qpairs": 1, 00:20:45.589 "pending_bdev_io": 0, 00:20:45.589 "completed_nvme_io": 19528, 00:20:45.589 "transports": [ 00:20:45.589 { 00:20:45.589 "trtype": "TCP" 00:20:45.589 } 00:20:45.589 ] 00:20:45.589 }, 00:20:45.589 { 00:20:45.589 "name": "nvmf_tgt_poll_group_001", 00:20:45.589 "admin_qpairs": 0, 00:20:45.589 "io_qpairs": 1, 00:20:45.589 "current_admin_qpairs": 0, 00:20:45.589 "current_io_qpairs": 1, 00:20:45.589 "pending_bdev_io": 0, 00:20:45.589 "completed_nvme_io": 19532, 00:20:45.589 "transports": [ 00:20:45.589 { 00:20:45.589 "trtype": "TCP" 00:20:45.589 } 00:20:45.589 ] 00:20:45.589 }, 00:20:45.589 { 00:20:45.589 "name": "nvmf_tgt_poll_group_002", 00:20:45.589 "admin_qpairs": 0, 00:20:45.589 "io_qpairs": 1, 00:20:45.589 "current_admin_qpairs": 0, 00:20:45.589 "current_io_qpairs": 1, 00:20:45.589 "pending_bdev_io": 0, 00:20:45.589 "completed_nvme_io": 19428, 00:20:45.589 "transports": [ 00:20:45.589 { 00:20:45.589 "trtype": "TCP" 00:20:45.589 } 00:20:45.589 ] 00:20:45.589 }, 00:20:45.589 { 00:20:45.589 "name": "nvmf_tgt_poll_group_003", 00:20:45.589 "admin_qpairs": 0, 00:20:45.589 "io_qpairs": 1, 00:20:45.589 "current_admin_qpairs": 0, 00:20:45.589 "current_io_qpairs": 1, 00:20:45.589 "pending_bdev_io": 0, 00:20:45.589 "completed_nvme_io": 19373, 00:20:45.589 "transports": [ 00:20:45.589 { 00:20:45.589 "trtype": "TCP" 00:20:45.589 } 00:20:45.589 ] 00:20:45.589 } 00:20:45.589 ] 00:20:45.589 }' 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 1) | length' 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # wc -l 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@78 -- # count=4 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@79 -- # [[ 4 -ne 4 ]] 00:20:45.589 12:06:32 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
target/perf_adq.sh@83 -- # wait 378522 00:20:53.714 Initializing NVMe Controllers 00:20:53.714 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:20:53.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:20:53.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:20:53.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:20:53.714 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:20:53.714 Initialization complete. Launching workers. 00:20:53.714 ======================================================== 00:20:53.714 Latency(us) 00:20:53.714 Device Information : IOPS MiB/s Average min max 00:20:53.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 10231.70 39.97 6255.39 1824.73 14471.62 00:20:53.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 10318.20 40.31 6202.36 1707.03 11541.53 00:20:53.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 10074.10 39.35 6353.05 1605.76 12562.80 00:20:53.714 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 10341.00 40.39 6188.65 1767.11 12287.25 00:20:53.714 ======================================================== 00:20:53.714 Total : 40965.00 160.02 6249.20 1605.76 14471.62 00:20:53.714 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@84 -- # nvmftestfini 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:20:53.714 rmmod nvme_tcp 00:20:53.714 rmmod nvme_fabrics 00:20:53.714 rmmod nvme_keyring 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 378373 ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 378373 ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:20:53.714 12:06:40 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 378373' 00:20:53.714 killing process with pid 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 378373 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:20:53.714 12:06:40 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:20:55.685 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:20:55.685 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@86 -- # adq_reload_driver 00:20:55.685 12:06:42 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@53 -- # rmmod ice 00:20:57.066 12:06:43 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@54 -- # modprobe ice 00:20:58.446 12:06:45 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@55 -- # sleep 5 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@89 -- # nvmftestinit 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@285 -- # xtrace_disable 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- 
nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # pci_devs=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # net_devs=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # e810=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@296 -- # local -ga e810 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # x722=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@297 -- # local -ga x722 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # mlx=() 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@298 -- # local -ga mlx 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@330 -- # 
pci_devs=("${e810[@]}") 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:03.729 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:03.729 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:03.729 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:03.730 Found net devices under 0000:86:00.0: cvl_0_0 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@383 -- # 
pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:03.730 Found net devices under 0000:86:00.1: cvl_0_1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@414 -- # is_hw=yes 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:03.730 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:03.730 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.178 ms 00:21:03.730 00:21:03.730 --- 10.0.0.2 ping statistics --- 00:21:03.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.730 rtt min/avg/max/mdev = 0.178/0.178/0.178/0.000 ms 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:03.730 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:03.730 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.387 ms 00:21:03.730 00:21:03.730 --- 10.0.0.1 ping statistics --- 00:21:03.730 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:03.730 rtt min/avg/max/mdev = 0.387/0.387/0.387/0.000 ms 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@422 -- # return 0 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@90 -- # adq_configure_driver 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@22 -- # ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@24 -- # ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@26 -- # sysctl -w net.core.busy_poll=1 00:21:03.730 net.core.busy_poll = 1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@27 -- # sysctl -w net.core.busy_read=1 00:21:03.730 net.core.busy_read = 1 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@29 -- # tc=/usr/sbin/tc 00:21:03.730 12:06:50 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@31 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@33 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc qdisc add dev cvl_0_0 ingress 00:21:03.990 
12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@35 -- # ip netns exec cvl_0_0_ns_spdk /usr/sbin/tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@38 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/nvmf/set_xps_rxqs cvl_0_0 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@91 -- # nvmfappstart -m 0xF --wait-for-rpc 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@481 -- # nvmfpid=382191 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@482 -- # waitforlisten 382191 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@829 -- # '[' -z 382191 ']' 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:03.990 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:03.990 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:03.990 [2024-07-25 12:06:51.185966] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:03.990 [2024-07-25 12:06:51.186011] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:03.990 EAL: No free 2048 kB hugepages reported on node 1 00:21:04.249 [2024-07-25 12:06:51.242793] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:04.249 [2024-07-25 12:06:51.328497] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:04.249 [2024-07-25 12:06:51.328534] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:04.249 [2024-07-25 12:06:51.328541] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:04.249 [2024-07-25 12:06:51.328547] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:04.249 [2024-07-25 12:06:51.328553] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
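The adq_configure_driver steps traced just before this second target start (ethtool, sysctl and tc, all executed inside the target namespace) are the host-side half of ADQ: they carve two hardware traffic classes out of the ice port and steer NVMe/TCP traffic for 10.0.0.2:4420 into the dedicated class. Pulled out of the trace into a standalone sketch, with the same interface, address and queue layout as this run (other adapters would need different queue counts):

  # enable hardware TC offload and turn off the packet-inspect optimization priv flag
  ip netns exec cvl_0_0_ns_spdk ethtool --offload cvl_0_0 hw-tc-offload on
  ip netns exec cvl_0_0_ns_spdk ethtool --set-priv-flags cvl_0_0 channel-pkt-inspect-optimize off
  # enable kernel busy polling for socket reads/polls (values are in microseconds)
  sysctl -w net.core.busy_poll=1
  sysctl -w net.core.busy_read=1
  # two traffic classes: TC0 gets queues 0-1, TC1 (the ADQ class) gets queues 2-3
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 root mqprio num_tc 2 map 0 1 queues 2@0 2@2 hw 1 mode channel
  ip netns exec cvl_0_0_ns_spdk tc qdisc add dev cvl_0_0 ingress
  # steer inbound NVMe/TCP (dst 10.0.0.2:4420) into TC1, offloaded to hardware (skip_sw)
  ip netns exec cvl_0_0_ns_spdk tc filter add dev cvl_0_0 protocol ip parent ffff: prio 1 flower dst_ip 10.0.0.2/32 ip_proto tcp dst_port 4420 skip_sw hw_tc 1

The last step in the trace runs scripts/perf/nvmf/set_xps_rxqs on the port, which, as the name suggests, configures XPS so that transmit queue selection follows the receive queues and the busy-poll groups stay aligned.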
00:21:04.249 [2024-07-25 12:06:51.328589] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:04.249 [2024-07-25 12:06:51.328686] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:04.249 [2024-07-25 12:06:51.328786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:04.250 [2024-07-25 12:06:51.328787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:04.819 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:04.819 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@862 -- # return 0 00:21:04.819 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:04.819 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:04.819 12:06:51 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@92 -- # adq_configure_nvmf_target 1 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # rpc_cmd sock_get_default_impl 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # jq -r .impl_name 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:04.819 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@42 -- # socket_impl=posix 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@43 -- # rpc_cmd sock_impl_set_options --enable-placement-id 1 --enable-zerocopy-send-server -i posix 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@44 -- # rpc_cmd framework_start_init 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@45 -- # rpc_cmd nvmf_create_transport -t tcp -o --io-unit-size 8192 --sock-priority 1 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 [2024-07-25 12:06:52.169955] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@46 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 Malloc1 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@47 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@48 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@49 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:05.079 [2024-07-25 12:06:52.217674] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@96 -- # perfpid=382443 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@93 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 64 -o 4096 -w randread -t 10 -c 0xF0 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:21:05.079 12:06:52 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@97 -- # sleep 2 00:21:05.079 EAL: No free 2048 kB hugepages reported on node 1 00:21:06.983 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # rpc_cmd nvmf_get_stats 00:21:06.983 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:06.983 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@99 -- # nvmf_stats='{ 00:21:07.243 "tick_rate": 2300000000, 00:21:07.243 "poll_groups": [ 00:21:07.243 { 00:21:07.243 "name": "nvmf_tgt_poll_group_000", 00:21:07.243 "admin_qpairs": 1, 00:21:07.243 "io_qpairs": 2, 00:21:07.243 "current_admin_qpairs": 1, 00:21:07.243 
"current_io_qpairs": 2, 00:21:07.243 "pending_bdev_io": 0, 00:21:07.243 "completed_nvme_io": 27163, 00:21:07.243 "transports": [ 00:21:07.243 { 00:21:07.243 "trtype": "TCP" 00:21:07.243 } 00:21:07.243 ] 00:21:07.243 }, 00:21:07.243 { 00:21:07.243 "name": "nvmf_tgt_poll_group_001", 00:21:07.243 "admin_qpairs": 0, 00:21:07.243 "io_qpairs": 2, 00:21:07.243 "current_admin_qpairs": 0, 00:21:07.243 "current_io_qpairs": 2, 00:21:07.243 "pending_bdev_io": 0, 00:21:07.243 "completed_nvme_io": 25594, 00:21:07.243 "transports": [ 00:21:07.243 { 00:21:07.243 "trtype": "TCP" 00:21:07.243 } 00:21:07.243 ] 00:21:07.243 }, 00:21:07.243 { 00:21:07.243 "name": "nvmf_tgt_poll_group_002", 00:21:07.243 "admin_qpairs": 0, 00:21:07.243 "io_qpairs": 0, 00:21:07.243 "current_admin_qpairs": 0, 00:21:07.243 "current_io_qpairs": 0, 00:21:07.243 "pending_bdev_io": 0, 00:21:07.243 "completed_nvme_io": 0, 00:21:07.243 "transports": [ 00:21:07.243 { 00:21:07.243 "trtype": "TCP" 00:21:07.243 } 00:21:07.243 ] 00:21:07.243 }, 00:21:07.243 { 00:21:07.243 "name": "nvmf_tgt_poll_group_003", 00:21:07.243 "admin_qpairs": 0, 00:21:07.243 "io_qpairs": 0, 00:21:07.243 "current_admin_qpairs": 0, 00:21:07.243 "current_io_qpairs": 0, 00:21:07.243 "pending_bdev_io": 0, 00:21:07.243 "completed_nvme_io": 0, 00:21:07.243 "transports": [ 00:21:07.243 { 00:21:07.243 "trtype": "TCP" 00:21:07.243 } 00:21:07.243 ] 00:21:07.243 } 00:21:07.243 ] 00:21:07.243 }' 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # jq -r '.poll_groups[] | select(.current_io_qpairs == 0) | length' 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # wc -l 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@100 -- # count=2 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@101 -- # [[ 2 -lt 2 ]] 00:21:07.243 12:06:54 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@106 -- # wait 382443 00:21:15.374 Initializing NVMe Controllers 00:21:15.374 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:21:15.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 4 00:21:15.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 5 00:21:15.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 6 00:21:15.374 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 7 00:21:15.374 Initialization complete. Launching workers. 
00:21:15.374 ======================================================== 00:21:15.374 Latency(us) 00:21:15.374 Device Information : IOPS MiB/s Average min max 00:21:15.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 4: 7153.30 27.94 8949.14 1681.38 54206.91 00:21:15.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 5: 7151.20 27.93 8952.52 1696.77 54495.75 00:21:15.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 6: 7086.20 27.68 9044.57 1849.95 53735.31 00:21:15.374 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 7: 6842.40 26.73 9358.94 1847.87 54568.01 00:21:15.374 ======================================================== 00:21:15.374 Total : 28233.09 110.29 9073.26 1681.38 54568.01 00:21:15.374 00:21:15.374 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@107 -- # nvmftestfini 00:21:15.374 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:15.374 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@117 -- # sync 00:21:15.374 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@120 -- # set +e 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:15.375 rmmod nvme_tcp 00:21:15.375 rmmod nvme_fabrics 00:21:15.375 rmmod nvme_keyring 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@124 -- # set -e 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@125 -- # return 0 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@489 -- # '[' -n 382191 ']' 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@490 -- # killprocess 382191 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@948 -- # '[' -z 382191 ']' 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@952 -- # kill -0 382191 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # uname 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 382191 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@966 -- # echo 'killing process with pid 382191' 00:21:15.375 killing process with pid 382191 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@967 -- # kill 382191 00:21:15.375 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@972 -- # wait 382191 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:15.634 12:07:02 
nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:15.634 12:07:02 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- target/perf_adq.sh@109 -- # trap - SIGINT SIGTERM EXIT 00:21:18.927 00:21:18.927 real 0m49.987s 00:21:18.927 user 2m48.892s 00:21:18.927 sys 0m9.545s 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_perf_adq -- common/autotest_common.sh@10 -- # set +x 00:21:18.927 ************************************ 00:21:18.927 END TEST nvmf_perf_adq 00:21:18.927 ************************************ 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@63 -- # run_test nvmf_shutdown /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.927 12:07:05 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:18.928 ************************************ 00:21:18.928 START TEST nvmf_shutdown 00:21:18.928 ************************************ 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh --transport=tcp 00:21:18.928 * Looking for test storage... 
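At this point the ADQ test returns to nvmf_target_extra.sh, which launches the next suite through the same run_test wrapper. Outside of Jenkins, the equivalent standalone invocation (typically as root from an SPDK checkout, with the test NICs already set up as above; the path below is a placeholder for the local workspace) would be:

  cd /path/to/spdk
  sudo ./test/nvmf/target/shutdown.sh --transport=tcp

shutdown.sh first sources test/nvmf/common.sh, which is where the environment dump that follows comes from (NVMF_PORT=4420, the generated host NQN, the long PATH exports and the PCI device discovery). Its first test case, nvmf_shutdown_tc1, then re-runs nvmftestinit against the same cvl_0_0/cvl_0_1 pair, which is why the device discovery output repeats below.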
00:21:18.928 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # uname -s 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 12:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@5 -- # export PATH 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@47 -- # : 0 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@147 -- # run_test nvmf_shutdown_tc1 nvmf_shutdown_tc1 00:21:18.928 12:07:05 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:18.928 ************************************ 00:21:18.928 START TEST nvmf_shutdown_tc1 00:21:18.928 ************************************ 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc1 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@74 -- # starttarget 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:18.928 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:18.929 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:18.929 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:18.929 12:07:05 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@295 -- # net_devs=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@295 -- # local -ga net_devs 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # e810=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@296 -- # local -ga e810 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # x722=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@297 -- # local -ga x722 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # mlx=() 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:24.271 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:24.272 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:24.272 12:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:24.272 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:24.272 Found net devices under 0000:86:00.0: cvl_0_0 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:24.272 Found net devices under 0000:86:00.1: cvl_0_1 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:24.272 12:07:11 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:24.272 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:24.272 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:21:24.272 00:21:24.272 --- 10.0.0.2 ping statistics --- 00:21:24.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.272 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:24.272 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:24.272 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.376 ms 00:21:24.272 00:21:24.272 --- 10.0.0.1 ping statistics --- 00:21:24.272 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:24.272 rtt min/avg/max/mdev = 0.376/0.376/0.376/0.000 ms 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@422 -- # return 0 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:24.272 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 
-- # set +x 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@481 -- # nvmfpid=387681 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@482 -- # waitforlisten 387681 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 387681 ']' 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:24.273 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:24.273 12:07:11 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:24.273 [2024-07-25 12:07:11.467619] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:24.273 [2024-07-25 12:07:11.467660] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:24.273 EAL: No free 2048 kB hugepages reported on node 1 00:21:24.532 [2024-07-25 12:07:11.526789] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:24.532 [2024-07-25 12:07:11.608640] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:24.532 [2024-07-25 12:07:11.608673] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:24.532 [2024-07-25 12:07:11.608680] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:24.532 [2024-07-25 12:07:11.608686] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:24.532 [2024-07-25 12:07:11.608692] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
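The interface plumbing and target launch traced above can be condensed into a short standalone sketch. It is assembled only from commands visible in this log (interface names cvl_0_0/cvl_0_1, addresses 10.0.0.1/10.0.0.2, port 4420 and the -i 0 -e 0xFFFF -m 0x1E options are simply what this run used); it is an illustration of the pattern, not the exact body of nvmf/common.sh, and the nvmf_tgt path is shortened to a repository-relative one. The reactor notices that follow show the target's cores (mask 0x1E, i.e. cores 1-4) coming up.

#!/usr/bin/env bash
# Illustrative sketch of the nvmf_tcp_init + target-start steps traced above.
set -e

NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk
NVMF_INITIATOR_IP=10.0.0.1
NVMF_FIRST_TARGET_IP=10.0.0.2
NVMF_PORT=4420

# Flush both ice ports, then move one of them into a private namespace so
# target and initiator traffic really crosses the link.
ip -4 addr flush cvl_0_0
ip -4 addr flush cvl_0_1
ip netns add "$NVMF_TARGET_NAMESPACE"
ip link set cvl_0_0 netns "$NVMF_TARGET_NAMESPACE"

# Address and bring up both sides, plus loopback inside the namespace.
ip addr add "$NVMF_INITIATOR_IP/24" dev cvl_0_1
ip netns exec "$NVMF_TARGET_NAMESPACE" ip addr add "$NVMF_FIRST_TARGET_IP/24" dev cvl_0_0
ip link set cvl_0_1 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set cvl_0_0 up
ip netns exec "$NVMF_TARGET_NAMESPACE" ip link set lo up

# Open the NVMe/TCP port and sanity-check reachability in both directions.
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport "$NVMF_PORT" -j ACCEPT
ping -c 1 "$NVMF_FIRST_TARGET_IP"
ip netns exec "$NVMF_TARGET_NAMESPACE" ping -c 1 "$NVMF_INITIATOR_IP"

# Finally start the target inside the namespace with the same core mask and
# trace flags as this run (path shortened from the workspace-absolute one).
ip netns exec "$NVMF_TARGET_NAMESPACE" \
    ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E &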
00:21:24.532 [2024-07-25 12:07:11.608786] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:24.532 [2024-07-25 12:07:11.608806] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:24.532 [2024-07-25 12:07:11.609385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:24.532 [2024-07-25 12:07:11.609385] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.102 [2024-07-25 12:07:12.328555] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.102 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@28 -- # cat 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:25.362 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.362 Malloc1 00:21:25.362 [2024-07-25 12:07:12.424475] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:25.362 Malloc2 00:21:25.362 Malloc3 00:21:25.362 Malloc4 00:21:25.362 Malloc5 00:21:25.622 Malloc6 00:21:25.622 Malloc7 00:21:25.622 Malloc8 00:21:25.622 Malloc9 00:21:25.622 Malloc10 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@78 -- # perfpid=387970 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@79 -- # waitforlisten 387970 /var/tmp/bdevperf.sock 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@829 -- # '[' -z 387970 ']' 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:25.622 12:07:12 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json /dev/fd/63 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@77 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:25.622 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.622 { 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme$subsystem", 00:21:25.622 "trtype": "$TEST_TRANSPORT", 00:21:25.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "$NVMF_PORT", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.622 "hdgst": ${hdgst:-false}, 00:21:25.622 "ddgst": ${ddgst:-false} 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.622 { 00:21:25.622 "params": { 00:21:25.622 "name": "Nvme$subsystem", 00:21:25.622 "trtype": "$TEST_TRANSPORT", 00:21:25.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "$NVMF_PORT", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.622 "hdgst": ${hdgst:-false}, 00:21:25.622 "ddgst": ${ddgst:-false} 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.622 { 00:21:25.622 "params": { 00:21:25.622 "name": 
"Nvme$subsystem", 00:21:25.622 "trtype": "$TEST_TRANSPORT", 00:21:25.622 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.622 "adrfam": "ipv4", 00:21:25.622 "trsvcid": "$NVMF_PORT", 00:21:25.622 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.622 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.622 "hdgst": ${hdgst:-false}, 00:21:25.622 "ddgst": ${ddgst:-false} 00:21:25.622 }, 00:21:25.622 "method": "bdev_nvme_attach_controller" 00:21:25.622 } 00:21:25.622 EOF 00:21:25.622 )") 00:21:25.622 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.882 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.882 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.882 { 00:21:25.882 "params": { 00:21:25.882 "name": "Nvme$subsystem", 00:21:25.882 "trtype": "$TEST_TRANSPORT", 00:21:25.882 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.882 "adrfam": "ipv4", 00:21:25.882 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 
00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 [2024-07-25 12:07:12.895340] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:25.883 [2024-07-25 12:07:12.895390] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk1 --proc-type=auto ] 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:25.883 { 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme$subsystem", 00:21:25.883 "trtype": "$TEST_TRANSPORT", 00:21:25.883 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 
"trsvcid": "$NVMF_PORT", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:25.883 "hdgst": ${hdgst:-false}, 00:21:25.883 "ddgst": ${ddgst:-false} 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 } 00:21:25.883 EOF 00:21:25.883 )") 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:25.883 EAL: No free 2048 kB hugepages reported on node 1 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:25.883 12:07:12 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme1", 00:21:25.883 "trtype": "tcp", 00:21:25.883 "traddr": "10.0.0.2", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "4420", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:25.883 "hdgst": false, 00:21:25.883 "ddgst": false 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 },{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme2", 00:21:25.883 "trtype": "tcp", 00:21:25.883 "traddr": "10.0.0.2", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "4420", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:25.883 "hdgst": false, 00:21:25.883 "ddgst": false 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 },{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme3", 00:21:25.883 "trtype": "tcp", 00:21:25.883 "traddr": "10.0.0.2", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "4420", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:25.883 "hdgst": false, 00:21:25.883 "ddgst": false 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 },{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme4", 00:21:25.883 "trtype": "tcp", 00:21:25.883 "traddr": "10.0.0.2", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "4420", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:25.883 "hdgst": false, 00:21:25.883 "ddgst": false 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 },{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme5", 00:21:25.883 "trtype": "tcp", 00:21:25.883 "traddr": "10.0.0.2", 00:21:25.883 "adrfam": "ipv4", 00:21:25.883 "trsvcid": "4420", 00:21:25.883 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:25.883 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:25.883 "hdgst": false, 00:21:25.883 "ddgst": false 00:21:25.883 }, 00:21:25.883 "method": "bdev_nvme_attach_controller" 00:21:25.883 },{ 00:21:25.883 "params": { 00:21:25.883 "name": "Nvme6", 00:21:25.884 "trtype": "tcp", 00:21:25.884 "traddr": "10.0.0.2", 00:21:25.884 "adrfam": "ipv4", 00:21:25.884 "trsvcid": "4420", 00:21:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:25.884 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:25.884 "hdgst": false, 00:21:25.884 "ddgst": false 00:21:25.884 }, 00:21:25.884 "method": "bdev_nvme_attach_controller" 00:21:25.884 },{ 00:21:25.884 "params": { 00:21:25.884 "name": "Nvme7", 00:21:25.884 "trtype": "tcp", 
00:21:25.884 "traddr": "10.0.0.2", 00:21:25.884 "adrfam": "ipv4", 00:21:25.884 "trsvcid": "4420", 00:21:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:25.884 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:25.884 "hdgst": false, 00:21:25.884 "ddgst": false 00:21:25.884 }, 00:21:25.884 "method": "bdev_nvme_attach_controller" 00:21:25.884 },{ 00:21:25.884 "params": { 00:21:25.884 "name": "Nvme8", 00:21:25.884 "trtype": "tcp", 00:21:25.884 "traddr": "10.0.0.2", 00:21:25.884 "adrfam": "ipv4", 00:21:25.884 "trsvcid": "4420", 00:21:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:25.884 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:25.884 "hdgst": false, 00:21:25.884 "ddgst": false 00:21:25.884 }, 00:21:25.884 "method": "bdev_nvme_attach_controller" 00:21:25.884 },{ 00:21:25.884 "params": { 00:21:25.884 "name": "Nvme9", 00:21:25.884 "trtype": "tcp", 00:21:25.884 "traddr": "10.0.0.2", 00:21:25.884 "adrfam": "ipv4", 00:21:25.884 "trsvcid": "4420", 00:21:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:25.884 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:25.884 "hdgst": false, 00:21:25.884 "ddgst": false 00:21:25.884 }, 00:21:25.884 "method": "bdev_nvme_attach_controller" 00:21:25.884 },{ 00:21:25.884 "params": { 00:21:25.884 "name": "Nvme10", 00:21:25.884 "trtype": "tcp", 00:21:25.884 "traddr": "10.0.0.2", 00:21:25.884 "adrfam": "ipv4", 00:21:25.884 "trsvcid": "4420", 00:21:25.884 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:25.884 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:25.884 "hdgst": false, 00:21:25.884 "ddgst": false 00:21:25.884 }, 00:21:25.884 "method": "bdev_nvme_attach_controller" 00:21:25.884 }' 00:21:25.884 [2024-07-25 12:07:12.951980] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:25.884 [2024-07-25 12:07:13.026401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@862 -- # return 0 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@80 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@83 -- # kill -9 387970 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@84 -- # rm -f /var/run/spdk_bdev1 00:21:27.263 12:07:14 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@87 -- # sleep 1 00:21:28.644 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 73: 387970 Killed $rootdir/test/app/bdev_svc/bdev_svc -m 0x1 -i 1 -r /var/tmp/bdevperf.sock --json <(gen_nvmf_target_json "${num_subsystems[@]}") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@88 -- # kill -0 387681 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -o 65536 -w verify -t 1 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@91 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # config=() 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@532 -- # local subsystem config 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.644 "method": "bdev_nvme_attach_controller" 00:21:28.644 } 00:21:28.644 EOF 00:21:28.644 )") 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.644 [2024-07-25 
12:07:15.538668] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:28.644 [2024-07-25 12:07:15.538719] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid388430 ] 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.644 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.644 { 00:21:28.644 "params": { 00:21:28.644 "name": "Nvme$subsystem", 00:21:28.644 "trtype": "$TEST_TRANSPORT", 00:21:28.644 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.644 "adrfam": "ipv4", 00:21:28.644 "trsvcid": "$NVMF_PORT", 00:21:28.644 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.644 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.644 "hdgst": ${hdgst:-false}, 00:21:28.644 "ddgst": ${ddgst:-false} 00:21:28.644 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 } 00:21:28.645 EOF 00:21:28.645 )") 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.645 { 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme$subsystem", 00:21:28.645 "trtype": "$TEST_TRANSPORT", 00:21:28.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "$NVMF_PORT", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.645 "hdgst": ${hdgst:-false}, 00:21:28.645 "ddgst": ${ddgst:-false} 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 } 00:21:28.645 EOF 00:21:28.645 )") 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:28.645 { 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme$subsystem", 00:21:28.645 "trtype": "$TEST_TRANSPORT", 00:21:28.645 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "$NVMF_PORT", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:28.645 "hdgst": ${hdgst:-false}, 00:21:28.645 "ddgst": ${ddgst:-false} 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 } 00:21:28.645 EOF 00:21:28.645 )") 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@554 -- # cat 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@556 -- # jq . 
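For reference, the pretty-printed configuration that follows is what the bdevperf process reads from /dev/fd/62; the test hands it over via process substitution. Reduced to its essentials (workspace path shortened), the invocation traced at shutdown.sh@91 amounts to:

# Queue depth 64, 64 KiB IOs, 'verify' workload for 1 second against the ten
# NVMe-oF controllers described by the generated JSON.
./build/examples/bdevperf \
    --json <(gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10) \
    -q 64 -o 65536 -w verify -t 1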
00:21:28.645 EAL: No free 2048 kB hugepages reported on node 1 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@557 -- # IFS=, 00:21:28.645 12:07:15 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme1", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme2", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme3", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme4", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme5", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme6", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme7", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme8", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": 
"nqn.2016-06.io.spdk:cnode8", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme9", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 },{ 00:21:28.645 "params": { 00:21:28.645 "name": "Nvme10", 00:21:28.645 "trtype": "tcp", 00:21:28.645 "traddr": "10.0.0.2", 00:21:28.645 "adrfam": "ipv4", 00:21:28.645 "trsvcid": "4420", 00:21:28.645 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:28.645 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:28.645 "hdgst": false, 00:21:28.645 "ddgst": false 00:21:28.645 }, 00:21:28.645 "method": "bdev_nvme_attach_controller" 00:21:28.645 }' 00:21:28.645 [2024-07-25 12:07:15.594877] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:28.645 [2024-07-25 12:07:15.670104] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:30.025 Running I/O for 1 seconds... 00:21:31.406 00:21:31.406 Latency(us) 00:21:31.406 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:31.406 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme1n1 : 1.02 188.20 11.76 0.00 0.00 336507.77 24960.67 295424.89 00:21:31.406 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme2n1 : 1.13 283.83 17.74 0.00 0.00 220147.36 23365.01 227951.30 00:21:31.406 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme3n1 : 1.14 280.85 17.55 0.00 0.00 219509.63 20743.57 299072.11 00:21:31.406 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme4n1 : 1.12 171.56 10.72 0.00 0.00 353821.38 25188.62 328249.88 00:21:31.406 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme5n1 : 1.11 172.58 10.79 0.00 0.00 346354.50 26898.25 311837.38 00:21:31.406 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme6n1 : 1.11 173.17 10.82 0.00 0.00 339140.49 77503.44 269894.34 00:21:31.406 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme7n1 : 1.14 279.88 17.49 0.00 0.00 207627.31 23137.06 222480.47 00:21:31.406 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme8n1 : 1.13 282.19 17.64 0.00 0.00 202482.38 23137.06 237069.36 00:21:31.406 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme9n1 : 1.15 278.10 17.38 0.00 0.00 202832.50 15956.59 249834.63 00:21:31.406 Job: Nvme10n1 (Core Mask 0x1, 
workload: verify, depth: 64, IO size: 65536) 00:21:31.406 Verification LBA range: start 0x0 length 0x400 00:21:31.406 Nvme10n1 : 1.16 276.00 17.25 0.00 0.00 201475.03 14075.99 246187.41 00:21:31.406 =================================================================================================================== 00:21:31.406 Total : 2386.36 149.15 0.00 0.00 247567.70 14075.99 328249.88 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@94 -- # stoptarget 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:31.406 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@117 -- # sync 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@120 -- # set +e 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:31.407 rmmod nvme_tcp 00:21:31.407 rmmod nvme_fabrics 00:21:31.407 rmmod nvme_keyring 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@124 -- # set -e 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@125 -- # return 0 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@489 -- # '[' -n 387681 ']' 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@490 -- # killprocess 387681 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@948 -- # '[' -z 387681 ']' 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@952 -- # kill -0 387681 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # uname 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:31.407 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 387681 00:21:31.667 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:31.667 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- 
common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:31.667 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 387681' 00:21:31.667 killing process with pid 387681 00:21:31.667 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@967 -- # kill 387681 00:21:31.667 12:07:18 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@972 -- # wait 387681 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:31.927 12:07:19 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:33.871 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:33.871 00:21:33.871 real 0m15.139s 00:21:33.871 user 0m35.303s 00:21:33.871 sys 0m5.405s 00:21:33.871 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:33.871 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc1 -- common/autotest_common.sh@10 -- # set +x 00:21:33.871 ************************************ 00:21:33.871 END TEST nvmf_shutdown_tc1 00:21:33.871 ************************************ 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@148 -- # run_test nvmf_shutdown_tc2 nvmf_shutdown_tc2 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:34.131 ************************************ 00:21:34.131 START TEST nvmf_shutdown_tc2 00:21:34.131 ************************************ 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc2 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@99 -- # starttarget 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:34.131 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:34.131 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # net_devs=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # e810=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@296 -- # local -ga e810 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # x722=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@297 -- # local -ga x722 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # mlx=() 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- 
nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:34.132 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:34.132 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:34.132 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:34.132 Found net devices under 0000:86:00.0: cvl_0_0 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:34.132 Found net devices under 0000:86:00.1: cvl_0_1 00:21:34.132 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:34.132 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:34.392 12:07:21 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:34.392 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:34.392 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.188 ms 00:21:34.392 00:21:34.392 --- 10.0.0.2 ping statistics --- 00:21:34.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.392 rtt min/avg/max/mdev = 0.188/0.188/0.188/0.000 ms 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:34.392 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:21:34.392 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.431 ms 00:21:34.392 00:21:34.392 --- 10.0.0.1 ping statistics --- 00:21:34.392 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:34.392 rtt min/avg/max/mdev = 0.431/0.431/0.431/0.000 ms 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@422 -- # return 0 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@481 -- # nvmfpid=389558 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@482 -- # waitforlisten 389558 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 389558 ']' 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # 
local max_retries=100 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:34.392 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:34.392 12:07:21 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:34.392 [2024-07-25 12:07:21.555080] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:34.392 [2024-07-25 12:07:21.555123] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:34.392 EAL: No free 2048 kB hugepages reported on node 1 00:21:34.392 [2024-07-25 12:07:21.613176] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:34.651 [2024-07-25 12:07:21.696256] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:34.651 [2024-07-25 12:07:21.696291] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:34.651 [2024-07-25 12:07:21.696298] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:34.651 [2024-07-25 12:07:21.696304] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:34.651 [2024-07-25 12:07:21.696310] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:21:34.651 [2024-07-25 12:07:21.696404] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:34.651 [2024-07-25 12:07:21.696509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:34.651 [2024-07-25 12:07:21.696936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:34.651 [2024-07-25 12:07:21.696936] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.219 [2024-07-25 12:07:22.421499] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.219 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.480 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:35.480 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@28 -- # cat 00:21:35.480 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:35.480 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:35.480 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.480 Malloc1 00:21:35.480 [2024-07-25 12:07:22.517111] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:35.480 Malloc2 00:21:35.480 Malloc3 00:21:35.480 Malloc4 00:21:35.480 Malloc5 00:21:35.480 Malloc6 00:21:35.740 Malloc7 00:21:35.740 Malloc8 00:21:35.740 Malloc9 00:21:35.740 Malloc10 00:21:35.740 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@103 -- # perfpid=389837 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@104 -- # waitforlisten 389837 /var/tmp/bdevperf.sock 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@829 -- # '[' -z 389837 ']' 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:35.741 12:07:22 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@102 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:35.741 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # config=() 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@532 -- # local subsystem config 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 
"name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:35.741 [2024-07-25 12:07:22.986152] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:35.741 [2024-07-25 12:07:22.986204] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid389837 ] 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:35.741 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:35.741 { 00:21:35.741 "params": { 00:21:35.741 "name": "Nvme$subsystem", 00:21:35.741 "trtype": "$TEST_TRANSPORT", 00:21:35.741 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:35.741 "adrfam": "ipv4", 00:21:35.741 "trsvcid": "$NVMF_PORT", 00:21:35.741 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:35.741 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:35.741 "hdgst": ${hdgst:-false}, 00:21:35.741 "ddgst": ${ddgst:-false} 00:21:35.741 }, 00:21:35.741 "method": "bdev_nvme_attach_controller" 00:21:35.741 } 00:21:35.741 EOF 00:21:35.741 )") 00:21:36.002 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:36.002 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.002 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.002 { 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme$subsystem", 00:21:36.002 "trtype": "$TEST_TRANSPORT", 00:21:36.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "$NVMF_PORT", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.002 "hdgst": ${hdgst:-false}, 00:21:36.002 "ddgst": ${ddgst:-false} 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 } 00:21:36.002 EOF 00:21:36.002 )") 00:21:36.002 12:07:22 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:36.002 { 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme$subsystem", 00:21:36.002 "trtype": "$TEST_TRANSPORT", 00:21:36.002 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:36.002 "adrfam": 
"ipv4", 00:21:36.002 "trsvcid": "$NVMF_PORT", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:36.002 "hdgst": ${hdgst:-false}, 00:21:36.002 "ddgst": ${ddgst:-false} 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 } 00:21:36.002 EOF 00:21:36.002 )") 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@554 -- # cat 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@556 -- # jq . 00:21:36.002 EAL: No free 2048 kB hugepages reported on node 1 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@557 -- # IFS=, 00:21:36.002 12:07:23 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme1", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme2", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme3", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme4", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme5", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme6", 00:21:36.002 "trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.002 "params": { 00:21:36.002 "name": "Nvme7", 00:21:36.002 
"trtype": "tcp", 00:21:36.002 "traddr": "10.0.0.2", 00:21:36.002 "adrfam": "ipv4", 00:21:36.002 "trsvcid": "4420", 00:21:36.002 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:36.002 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:36.002 "hdgst": false, 00:21:36.002 "ddgst": false 00:21:36.002 }, 00:21:36.002 "method": "bdev_nvme_attach_controller" 00:21:36.002 },{ 00:21:36.003 "params": { 00:21:36.003 "name": "Nvme8", 00:21:36.003 "trtype": "tcp", 00:21:36.003 "traddr": "10.0.0.2", 00:21:36.003 "adrfam": "ipv4", 00:21:36.003 "trsvcid": "4420", 00:21:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:36.003 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:36.003 "hdgst": false, 00:21:36.003 "ddgst": false 00:21:36.003 }, 00:21:36.003 "method": "bdev_nvme_attach_controller" 00:21:36.003 },{ 00:21:36.003 "params": { 00:21:36.003 "name": "Nvme9", 00:21:36.003 "trtype": "tcp", 00:21:36.003 "traddr": "10.0.0.2", 00:21:36.003 "adrfam": "ipv4", 00:21:36.003 "trsvcid": "4420", 00:21:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:36.003 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:36.003 "hdgst": false, 00:21:36.003 "ddgst": false 00:21:36.003 }, 00:21:36.003 "method": "bdev_nvme_attach_controller" 00:21:36.003 },{ 00:21:36.003 "params": { 00:21:36.003 "name": "Nvme10", 00:21:36.003 "trtype": "tcp", 00:21:36.003 "traddr": "10.0.0.2", 00:21:36.003 "adrfam": "ipv4", 00:21:36.003 "trsvcid": "4420", 00:21:36.003 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:36.003 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:36.003 "hdgst": false, 00:21:36.003 "ddgst": false 00:21:36.003 }, 00:21:36.003 "method": "bdev_nvme_attach_controller" 00:21:36.003 }' 00:21:36.003 [2024-07-25 12:07:23.042804] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:36.003 [2024-07-25 12:07:23.117348] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:37.381 Running I/O for 10 seconds... 
00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@862 -- # return 0 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@105 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@107 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@57 -- # local ret=1 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@58 -- # local i 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:37.381 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:37.641 12:07:24 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:37.641 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:37.900 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:37.900 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:37.900 12:07:24 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:37.900 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:37.900 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@60 -- # read_io_count=131 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@63 -- # '[' 131 -ge 100 ']' 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@64 -- # ret=0 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@65 -- # break 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@69 -- # return 0 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@110 -- # killprocess 389837 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 389837 ']' 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 389837 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 389837 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 389837' 00:21:38.159 killing process with pid 389837 00:21:38.159 12:07:25 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 389837
00:21:38.159 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 389837
00:21:38.160 Received shutdown signal, test time was about 0.973943 seconds
00:21:38.160
00:21:38.160 Latency(us)
00:21:38.160 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:38.160 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme1n1 : 0.94 270.94 16.93 0.00 0.00 233810.59 20971.52 235245.75
00:21:38.160 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme2n1 : 0.91 282.86 17.68 0.00 0.00 219643.55 34648.60 197861.73
00:21:38.160 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme3n1 : 0.93 276.61 17.29 0.00 0.00 220952.04 20857.54 226127.69
00:21:38.160 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme4n1 : 0.90 284.73 17.80 0.00 0.00 210298.43 20401.64 242540.19
00:21:38.160 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme5n1 : 0.97 263.03 16.44 0.00 0.00 215170.45 22225.25 227951.30
00:21:38.160 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme6n1 : 0.93 275.88 17.24 0.00 0.00 209433.60 22225.25 237069.36
00:21:38.160 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme7n1 : 0.94 273.15 17.07 0.00 0.00 208046.75 18919.96 214274.23
00:21:38.160 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme8n1 : 0.91 210.28 13.14 0.00 0.00 263983.04 36700.16 230686.72
00:21:38.160 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme9n1 : 0.90 212.86 13.30 0.00 0.00 254676.22 21883.33 230686.72
00:21:38.160 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:38.160 Verification LBA range: start 0x0 length 0x400
00:21:38.160 Nvme10n1 : 0.94 204.14 12.76 0.00 0.00 262891.67 20857.54 310013.77
00:21:38.160 ===================================================================================================================
00:21:38.160 Total : 2554.48 159.65 0.00 0.00 227407.42 18919.96 310013.77
00:21:38.460 12:07:25 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@113 -- # sleep 1
00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@114 -- # kill -0 389558
00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@116 -- # stoptarget
00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state
00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@42 -- # rm -rf
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@117 -- # sync 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@120 -- # set +e 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:39.426 rmmod nvme_tcp 00:21:39.426 rmmod nvme_fabrics 00:21:39.426 rmmod nvme_keyring 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@124 -- # set -e 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@125 -- # return 0 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@489 -- # '[' -n 389558 ']' 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@490 -- # killprocess 389558 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@948 -- # '[' -z 389558 ']' 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@952 -- # kill -0 389558 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # uname 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:39.426 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 389558 00:21:39.685 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:39.685 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:39.686 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 389558' 00:21:39.686 killing process with pid 389558 00:21:39.686 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@967 -- # kill 389558 00:21:39.686 12:07:26 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@972 -- # wait 389558 00:21:39.944 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:39.945 
12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:39.945 12:07:27 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.484 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:42.484 00:21:42.484 real 0m7.930s 00:21:42.485 user 0m23.846s 00:21:42.485 sys 0m1.428s 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc2 -- common/autotest_common.sh@10 -- # set +x 00:21:42.485 ************************************ 00:21:42.485 END TEST nvmf_shutdown_tc2 00:21:42.485 ************************************ 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@149 -- # run_test nvmf_shutdown_tc3 nvmf_shutdown_tc3 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:42.485 ************************************ 00:21:42.485 START TEST nvmf_shutdown_tc3 00:21:42.485 ************************************ 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1123 -- # nvmf_shutdown_tc3 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@121 -- # starttarget 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@15 -- # nvmftestinit 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:42.485 12:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@285 -- # xtrace_disable 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # pci_devs=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # net_devs=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # e810=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@296 -- # local -ga e810 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # x722=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@297 -- # local -ga x722 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # mlx=() 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@298 -- # local -ga mlx 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 
00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:42.485 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:42.485 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:42.485 12:07:29 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:42.485 Found net devices under 0000:86:00.0: cvl_0_0 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:42.485 Found net devices under 0000:86:00.1: cvl_0_1 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@414 -- # is_hw=yes 00:21:42.485 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 
-- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:42.486 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:42.486 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.185 ms 00:21:42.486 00:21:42.486 --- 10.0.0.2 ping statistics --- 00:21:42.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.486 rtt min/avg/max/mdev = 0.185/0.185/0.185/0.000 ms 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:42.486 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:42.486 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.419 ms 00:21:42.486 00:21:42.486 --- 10.0.0.1 ping statistics --- 00:21:42.486 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:42.486 rtt min/avg/max/mdev = 0.419/0.419/0.419/0.000 ms 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@422 -- # return 0 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@18 -- # nvmfappstart -m 0x1E 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@481 -- # nvmfpid=390988 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@482 -- # waitforlisten 390988 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 390988 ']' 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:42.486 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
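The ping exchange above closes out nvmf_tcp_init: one E810 port (cvl_0_0) is moved into the cvl_0_0_ns_spdk namespace and addressed as 10.0.0.2 for the target, while the other port (cvl_0_1) stays in the default namespace as the 10.0.0.1 initiator side, with TCP port 4420 opened in iptables. A condensed sketch of that bring-up, using only the commands traced above; the shell variables are added here for readability and are not part of the script:

  # Network bring-up as traced in this log; interface names and addresses
  # are the ones printed above (cvl_0_0 / cvl_0_1 on the E810 pair).
  target_if=cvl_0_0          # becomes the NVMe-oF target port
  initiator_if=cvl_0_1       # stays in the default namespace as the host port
  ns=cvl_0_0_ns_spdk

  ip -4 addr flush "$target_if"
  ip -4 addr flush "$initiator_if"
  ip netns add "$ns"
  ip link set "$target_if" netns "$ns"
  ip addr add 10.0.0.1/24 dev "$initiator_if"
  ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
  ip link set "$initiator_if" up
  ip netns exec "$ns" ip link set "$target_if" up
  ip netns exec "$ns" ip link set lo up
  iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                       # host -> target, seen above
  ip netns exec "$ns" ping -c 1 10.0.0.1   # target -> host, seen above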
00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1E 00:21:42.486 12:07:29 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:42.486 [2024-07-25 12:07:29.495424] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:42.486 [2024-07-25 12:07:29.495468] [ DPDK EAL parameters: nvmf -c 0x1E --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:42.486 EAL: No free 2048 kB hugepages reported on node 1 00:21:42.486 [2024-07-25 12:07:29.553449] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:21:42.486 [2024-07-25 12:07:29.633893] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:42.486 [2024-07-25 12:07:29.633926] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:42.486 [2024-07-25 12:07:29.633934] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:42.486 [2024-07-25 12:07:29.633942] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:42.486 [2024-07-25 12:07:29.633947] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
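nvmf_tgt is launched here inside the target namespace with core mask 0x1E, i.e. binary 11110, which is why the EAL reports four available cores and the reactor notices that follow show reactors on cores 1-4. A quick way to expand such a mask, for reference (the mask value is the one passed via nvmfappstart -m 0x1E on this run):

  mask=0x1E
  for core in $(seq 0 31); do
      (( (mask >> core) & 1 )) && echo "reactor expected on core $core"
  done   # prints cores 1, 2, 3 and 4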
00:21:42.486 [2024-07-25 12:07:29.633989] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:42.486 [2024-07-25 12:07:29.634076] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:42.486 [2024-07-25 12:07:29.634182] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:42.486 [2024-07-25 12:07:29.634183] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:21:43.055 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:43.055 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:43.055 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:43.055 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.055 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@20 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 [2024-07-25 12:07:30.343443] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@22 -- # num_subsystems=({1..10}) 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@24 -- # timing_enter create_subsystems 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@26 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in 
"${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@27 -- # for i in "${num_subsystems[@]}" 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@28 -- # cat 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@35 -- # rpc_cmd 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:43.316 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.316 Malloc1 00:21:43.316 [2024-07-25 12:07:30.439147] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:43.316 Malloc2 00:21:43.316 Malloc3 00:21:43.316 Malloc4 00:21:43.576 Malloc5 00:21:43.576 Malloc6 00:21:43.576 Malloc7 00:21:43.576 Malloc8 00:21:43.576 Malloc9 00:21:43.576 Malloc10 00:21:43.576 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:43.576 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@36 -- # timing_exit create_subsystems 00:21:43.576 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:43.576 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@125 -- # perfpid=391270 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@126 -- # waitforlisten 391270 /var/tmp/bdevperf.sock 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@829 -- # '[' -z 391270 ']' 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:43.835 12:07:30 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -r /var/tmp/bdevperf.sock --json /dev/fd/63 -q 64 -o 65536 -w verify -t 10 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@124 -- # gen_nvmf_target_json 1 2 3 4 5 6 7 8 9 10 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:43.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # config=() 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@532 -- # local subsystem config 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.835 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.835 { 00:21:43.835 "params": { 00:21:43.835 "name": "Nvme$subsystem", 00:21:43.835 "trtype": "$TEST_TRANSPORT", 00:21:43.835 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.835 "adrfam": "ipv4", 00:21:43.835 "trsvcid": "$NVMF_PORT", 00:21:43.835 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.835 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 
"name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in 
"${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 [2024-07-25 12:07:30.903355] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:43.836 [2024-07-25 12:07:30.903405] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid391270 ] 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:21:43.836 { 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme$subsystem", 00:21:43.836 "trtype": "$TEST_TRANSPORT", 00:21:43.836 "traddr": "$NVMF_FIRST_TARGET_IP", 00:21:43.836 "adrfam": 
"ipv4", 00:21:43.836 "trsvcid": "$NVMF_PORT", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:21:43.836 "hdgst": ${hdgst:-false}, 00:21:43.836 "ddgst": ${ddgst:-false} 00:21:43.836 }, 00:21:43.836 "method": "bdev_nvme_attach_controller" 00:21:43.836 } 00:21:43.836 EOF 00:21:43.836 )") 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@554 -- # cat 00:21:43.836 EAL: No free 2048 kB hugepages reported on node 1 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@556 -- # jq . 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@557 -- # IFS=, 00:21:43.836 12:07:30 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:21:43.836 "params": { 00:21:43.836 "name": "Nvme1", 00:21:43.836 "trtype": "tcp", 00:21:43.836 "traddr": "10.0.0.2", 00:21:43.836 "adrfam": "ipv4", 00:21:43.836 "trsvcid": "4420", 00:21:43.836 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:43.836 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:21:43.836 "hdgst": false, 00:21:43.836 "ddgst": false 00:21:43.836 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme2", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme3", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode3", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host3", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme4", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode4", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host4", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme5", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode5", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host5", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme6", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode6", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host6", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme7", 00:21:43.837 
"trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode7", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host7", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme8", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode8", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host8", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme9", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode9", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host9", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 },{ 00:21:43.837 "params": { 00:21:43.837 "name": "Nvme10", 00:21:43.837 "trtype": "tcp", 00:21:43.837 "traddr": "10.0.0.2", 00:21:43.837 "adrfam": "ipv4", 00:21:43.837 "trsvcid": "4420", 00:21:43.837 "subnqn": "nqn.2016-06.io.spdk:cnode10", 00:21:43.837 "hostnqn": "nqn.2016-06.io.spdk:host10", 00:21:43.837 "hdgst": false, 00:21:43.837 "ddgst": false 00:21:43.837 }, 00:21:43.837 "method": "bdev_nvme_attach_controller" 00:21:43.837 }' 00:21:43.837 [2024-07-25 12:07:30.959145] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:43.837 [2024-07-25 12:07:31.033621] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:45.742 Running I/O for 10 seconds... 
00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@862 -- # return 0 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@127 -- # rpc_cmd -s /var/tmp/bdevperf.sock framework_wait_init 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@130 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill -9 $perfpid || true; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@132 -- # waitforio /var/tmp/bdevperf.sock Nvme1n1 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@50 -- # '[' -z /var/tmp/bdevperf.sock ']' 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@54 -- # '[' -z Nvme1n1 ']' 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@57 -- # local ret=1 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@58 -- # local i 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i = 10 )) 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=3 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 3 -ge 100 ']' 00:21:45.742 12:07:32 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- 
target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=67 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 67 -ge 100 ']' 00:21:46.001 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@67 -- # sleep 0.25 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i-- )) 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@59 -- # (( i != 0 )) 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_get_iostat -b Nvme1n1 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # jq -r '.bdevs[0].num_read_ops' 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@60 -- # read_io_count=195 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@63 -- # '[' 195 -ge 100 ']' 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@64 -- # ret=0 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@65 -- # break 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@69 -- # return 0 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@135 -- # killprocess 390988 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@948 -- # '[' -z 390988 ']' 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@952 -- # kill -0 390988 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # uname 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 390988 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:46.273 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:46.273 12:07:33 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@966 -- # echo 'killing process with pid 390988' killing process with pid 390988 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@967 -- # kill 390988 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@972 -- # wait 390988
[2024-07-25 12:07:33.445945] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477180 is same with the state(5) to be set
[... the previous tcp.c:1653 message repeats for tqpair=0x1477180 with timestamps 12:07:33.445994 through 12:07:33.446378 ...]
[2024-07-25 12:07:33.447249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1479300 is same with the state(5) to be set
[... repeats for tqpair=0x1479300 with timestamps 12:07:33.447277 through 12:07:33.447852 ...]
[2024-07-25 12:07:33.448816] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477640 is same with the state(5) to be set
[2024-07-25 12:07:33.449351] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477b00 is same with the state(5) to be set
[2024-07-25 12:07:33.450200] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1477fe0 is same with the state(5) to be set
[... repeats for tqpair=0x1477fe0 with timestamps 12:07:33.450223 through 12:07:33.450587 ...]
[2024-07-25 12:07:33.451206] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x14784a0 is same with the state(5) to be set
[... repeats for tqpair=0x14784a0 with timestamps 12:07:33.451220 through 12:07:33.451579 ...]
[2024-07-25 12:07:33.452333] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664230 is same with the state(5) to be set
[2024-07-25 12:07:33.452346] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1664230 is same with the state(5) to be set
[2024-07-25 12:07:33.453132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x16646f0 is same with the state(5) to be set
[2024-07-25 12:07:33.454020] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1478e40 is same with the state(5) to be set
[... repeats for tqpair=0x1478e40 with timestamps 12:07:33.454032 through 12:07:33.454403 ...]
[2024-07-25 12:07:33.457857] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457889] nvme_qpair.c:
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.457898] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.457913] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.457927] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.457940] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154bc0 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.457972] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.457988] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.457994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458002] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458016] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458029] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112aa70 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.458059] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458074] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458092] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458105] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458118] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114df60 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.458142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458157] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458171] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458185] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458197] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1b90 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.458221] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458238] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458251] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458265] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458278] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115d910 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.458298] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458316] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458329] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458343] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458356] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae0340 is same with the state(5) to be set 00:21:46.277 [2024-07-25 12:07:33.458377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458392] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458406] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.277 [2024-07-25 12:07:33.458412] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.277 [2024-07-25 12:07:33.458419] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458432] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdf30 is same with the 
state(5) to be set 00:21:46.278 [2024-07-25 12:07:33.458453] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458481] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458495] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458508] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf91c70 is same with the state(5) to be set 00:21:46.278 [2024-07-25 12:07:33.458531] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458546] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458553] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458560] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458586] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112b840 is same with the state(5) to be set 00:21:46.278 [2024-07-25 12:07:33.458608] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458623] nvme_qpair.c: 
223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458636] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:21:46.278 [2024-07-25 12:07:33.458656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458662] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139700 is same with the state(5) to be set 00:21:46.278 [2024-07-25 12:07:33.458798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458910] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458935] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.458992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.458999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459021] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459089] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.278 [2024-07-25 12:07:33.459170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.278 [2024-07-25 12:07:33.459179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ 
DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459238] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459388] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459402] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459495] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:46.279 [2024-07-25 12:07:33.459519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 
[2024-07-25 12:07:33.459664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459727] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.279 [2024-07-25 12:07:33.459750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.279 [2024-07-25 12:07:33.459756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.459823] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d3920 was disconnected and freed. reset controller. 
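(Editor's note on the block above, for readers following the raw output: this is the expected teardown path in this test. The target tears down the TCP queue pairs, every outstanding admin and I/O command is completed with "ABORTED - SQ DELETION (00/08)", and the bdev_nvme layer then reports that the disconnected qpair was freed and a controller reset is started. As a rough illustration of how a consumer of the SPDK NVMe driver can recognize this status in its completion callback, here is a minimal sketch; the io_ctx structure, the handle_io_completion name and the needs_reset flag are illustrative assumptions for this note, not code taken from this test or from bdev_nvme.)

    #include <stdbool.h>
    #include "spdk/nvme.h"   /* spdk_nvme_cpl and NVMe status code definitions */

    /* Illustrative per-I/O context; not part of SPDK or of this test. */
    struct io_ctx {
        struct spdk_nvme_ctrlr *ctrlr;
        bool                   *needs_reset;  /* set when the queue pair went away under us */
    };

    /* Completion callback with the standard spdk_nvme_cmd_cb signature.
     * When the target deletes the submission queue (as in the log above),
     * outstanding commands complete with SCT=0x0 (GENERIC) and SC=0x08,
     * i.e. "ABORTED - SQ DELETION". */
    static void
    handle_io_completion(void *cb_arg, const struct spdk_nvme_cpl *cpl)
    {
        struct io_ctx *ctx = cb_arg;

        if (spdk_nvme_cpl_is_error(cpl) &&
            cpl->status.sct == SPDK_NVME_SCT_GENERIC &&
            cpl->status.sc == SPDK_NVME_SC_ABORTED_SQ_DELETION) {
            /* The command was never executed; the qpair is gone.
             * Flag the controller for a reset-and-resubmit pass. */
            *ctx->needs_reset = true;
            return;
        }

        /* ... normal success/error handling ... */
    }

(In this run the equivalent handling happens inside bdev_nvme itself, which is why the log ends the burst with bdev_nvme_disconnected_qpair_cb freeing qpair 0x10d3920 and announcing "reset controller"; the sketch is only meant to make that sequence legible.)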
00:21:46.280 [2024-07-25 12:07:33.479286] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154bc0 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112aa70 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114df60 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc1b90 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115d910 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479397] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae0340 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdf30 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479424] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf91c70 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112b840 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.479448] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1139700 (9): Bad file descriptor 00:21:46.280 [2024-07-25 12:07:33.480530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:21:46.280 [2024-07-25 12:07:33.480628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.280 [2024-07-25 12:07:33.480776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 
12:07:33.480924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480938] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.480989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.480995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.481003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.481009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.481018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.481024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.280 [2024-07-25 12:07:33.481032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.280 [2024-07-25 12:07:33.481039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481073] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481118] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481221] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481307] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481314] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481373] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481410] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481417] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481431] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481565] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x10d4c80 was disconnected and freed. reset controller. 
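(Editor's note: the second burst ends the same way, with qpair 0x10d4c80 disconnected, freed and a controller reset queued. Just before it, the driver logs "Failed to flush tqpair=... (9): Bad file descriptor" for each TCP qpair; errno 9 (EBADF) here simply means the sockets backing those queue pairs were already closed when the flush was attempted. At the level of the public driver API, recovery amounts to dropping the dead I/O qpair, resetting the controller, and allocating a fresh qpair before resubmitting the aborted I/O. The outline below is an assumption-laden sketch of that sequence, written for this note (the recover_qpair name and the resubmission comment are illustrative); it is not the bdev_nvme code path exercised by this test.)

    #include "spdk/nvme.h"

    /* Illustrative recovery outline after a transport-level disconnect,
     * e.g. the EBADF "Failed to flush tqpair" errors in the log above. */
    static struct spdk_nvme_qpair *
    recover_qpair(struct spdk_nvme_ctrlr *ctrlr, struct spdk_nvme_qpair *dead_qpair)
    {
        /* The old qpair can no longer reach the target; release it. */
        spdk_nvme_ctrlr_free_io_qpair(dead_qpair);

        /* Bring the controller back to a usable state. */
        if (spdk_nvme_ctrlr_reset(ctrlr) != 0) {
            return NULL;   /* controller could not be recovered */
        }

        /* Allocate a replacement I/O queue pair with default options; the
         * caller then resubmits the I/O that completed with
         * "ABORTED - SQ DELETION". */
        return spdk_nvme_ctrlr_alloc_io_qpair(ctrlr, NULL, 0);
    }

(bdev_nvme performs this internally as part of its "reset controller" path, so no application code is involved in the run recorded here; the sketch only mirrors the steps for readers of the raw log.)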
00:21:46.281 [2024-07-25 12:07:33.481750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481805] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.281 [2024-07-25 12:07:33.481855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.281 [2024-07-25 12:07:33.481863] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481870] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481902] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 
[2024-07-25 12:07:33.481910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.481988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.481996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 
12:07:33.482060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 
12:07:33.482208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482354] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.282 [2024-07-25 12:07:33.482383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.282 [2024-07-25 12:07:33.482390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482501] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482545] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482552] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482566] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482604] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482627] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482650] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482665] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482686] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482700] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482761] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0xf8d8c0 was disconnected and freed. reset controller. 00:21:46.283 [2024-07-25 12:07:33.482828] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482882] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482890] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482912] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.482988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.482996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483040] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483065] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.283 [2024-07-25 12:07:33.483080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.283 [2024-07-25 12:07:33.483088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483134] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483141] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483214] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483258] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483359] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483420] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483478] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483507] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483606] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483621] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.284 [2024-07-25 12:07:33.483657] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.284 [2024-07-25 12:07:33.483667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483760] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483832] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x18bf2d0 was disconnected and freed. reset controller. 
00:21:46.285 [2024-07-25 12:07:33.483896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:32768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:32896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:33024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:33152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:33280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:33408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.483988] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:33536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.483995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.484003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:33664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.484009] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:33792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:33920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 
12:07:33.491163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:34048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:34176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:34304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:34432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:34560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:34688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491249] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:34816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491266] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:34944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491281] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:35072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491296] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:35200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 
12:07:33.491309] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:35328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:35456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:35584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:35712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491366] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:35840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:35968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:36096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:36224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:36352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 12:07:33.491437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:36480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.285 [2024-07-25 
12:07:33.491453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:36608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.285 [2024-07-25 12:07:33.491460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:36736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:36864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:36992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:37120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:37248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:37376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:37504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:37632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:37760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 
12:07:33.491596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:37888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:38016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:38144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:38272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:38400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:38528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:38656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:38784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:38912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:39040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 
12:07:33.491744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:39168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:39296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:39424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:39552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491793] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:39680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:39808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:39936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491846] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:40064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491853] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:40192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491875] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:40320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 
12:07:33.491889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:40448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:40576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:40704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:40832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.491939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.491999] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a66d10 was disconnected and freed. reset controller. 00:21:46.286 [2024-07-25 12:07:33.492090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.286 [2024-07-25 12:07:33.492099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.286 [2024-07-25 12:07:33.492110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492173] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492179] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492319] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492397] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492468] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492587] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492617] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492632] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492655] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.287 [2024-07-25 12:07:33.492736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.287 [2024-07-25 12:07:33.492744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492765] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492773] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492848] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492932] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492972] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.492982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.492991] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.493002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.493011] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.493022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.493030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.493041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.493055] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.493067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.493075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.493086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.493095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.497222] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x104e060 was disconnected and freed. reset controller. 00:21:46.288 [2024-07-25 12:07:33.497258] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:46.288 [2024-07-25 12:07:33.497332] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
00:21:46.288 [2024-07-25 12:07:33.497347] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.288 [2024-07-25 12:07:33.497359] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.288 [2024-07-25 12:07:33.497387] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.288 [2024-07-25 12:07:33.497414] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.288 [2024-07-25 12:07:33.504384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.288 [2024-07-25 12:07:33.504431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115d910 with addr=10.0.0.2, port=4420 00:21:46.288 [2024-07-25 12:07:33.504444] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115d910 is same with the state(5) to be set 00:21:46.288 [2024-07-25 12:07:33.504941] nvme_tcp.c:1241:nvme_tcp_pdu_ch_handle: *ERROR*: Unexpected PDU type 0x00 00:21:46.288 [2024-07-25 12:07:33.505103] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode3] resetting controller 00:21:46.288 [2024-07-25 12:07:33.505145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115d910 (9): Bad file descriptor 00:21:46.288 [2024-07-25 12:07:33.505195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505235] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505247] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505448] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505469] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.288 [2024-07-25 12:07:33.505521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 
dnr:0 00:21:46.288 [2024-07-25 12:07:33.505541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.288 [2024-07-25 12:07:33.505550] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505591] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 
[2024-07-25 12:07:33.505750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505791] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505854] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505937] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 
12:07:33.505958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.505987] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.505998] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506127] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506169] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506375] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.289 [2024-07-25 12:07:33.506468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.289 [2024-07-25 12:07:33.506479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.506488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.506500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.506509] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.506520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.506530] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:8192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:8320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508326] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:8448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:8576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:8704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:8832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:8960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:9088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:9216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:9344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:9472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508506] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:9600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:12 nsid:1 lba:9728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:9856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508579] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:9984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508608] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:10240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:10368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:10496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:10624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:10752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:10880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 
lba:11008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:11136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:11264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:11392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:11520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:11648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:11776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:11904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:12160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508954] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:12288 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508975] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:12416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.508984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.508995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.509004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.509016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:12672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.509025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.509036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:12800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.509049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.509061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.509070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.290 [2024-07-25 12:07:33.509082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:13056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.290 [2024-07-25 12:07:33.509091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:13184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:13312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509133] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:13440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:13568 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:13696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509209] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:13824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:13952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:14080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:14208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:14336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:14464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509332] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:14592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:14720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:14848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.291 [2024-07-25 12:07:33.509383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:14976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:15104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:15232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:15360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509467] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:15488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:15616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509508] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:15744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509528] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:16000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 
12:07:33.509590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.509602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:16256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.509611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:24576 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511030] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511048] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:24704 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:24832 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511080] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511092] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:24960 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511101] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:25088 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:25216 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511145] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:25344 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:25472 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:25600 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511207] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:25728 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:25856 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:25984 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:26112 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:26240 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511311] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:26368 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:26496 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511364] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:26624 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511384] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:26752 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:26880 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511417] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511428] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:27008 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.291 [2024-07-25 12:07:33.511437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.291 [2024-07-25 12:07:33.511449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:27136 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:27264 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:27392 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:27520 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:27648 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:27776 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:27904 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:28032 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:28160 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511624] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511635] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:28288 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:28416 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:28544 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:28672 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:28800 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:28928 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:29056 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:29184 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:29312 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:29440 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511833] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:29568 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:29696 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:29824 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511895] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:29952 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:30080 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:30208 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:30336 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.511983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.511994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:30464 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:30592 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:30720 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512048] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:30848 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:30976 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:31104 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:31232 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512142] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:31360 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:31488 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:31616 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:31744 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512227] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:31872 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:32000 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512257] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:32128 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512277] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:32256 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512298] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:32384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:32512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:32640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.292 [2024-07-25 12:07:33.512360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.292 [2024-07-25 12:07:33.512371] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf8c3d0 is same with the state(5) to be set 00:21:46.554 [2024-07-25 12:07:33.514361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:16384 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:16512 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514396] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:16640 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:16768 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:4 nsid:1 lba:16896 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:17024 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:17152 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:17280 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514488] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:17408 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:17536 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:17664 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:17792 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:17920 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:18048 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 
lba:18176 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:18432 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:18560 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:18688 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:18816 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:18944 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:19072 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:19200 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.554 [2024-07-25 12:07:33.514714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.554 [2024-07-25 12:07:33.514722] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:19328 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:19456 len:128 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:19584 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514758] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19712 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514781] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:19840 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514788] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:19968 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:20096 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:20224 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:20352 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:20480 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:20608 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:20736 len:128 SGL TRANSPORT DATA BLOCK 
TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514891] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:20864 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514914] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:20992 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:21120 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:21248 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514950] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514959] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:21376 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514973] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21504 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.514989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:21632 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.514995] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515003] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:21760 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:21888 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515025] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:22016 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:21:46.555 [2024-07-25 12:07:33.515039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:22144 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:22272 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:22400 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:22528 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:22656 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:22784 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:22912 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:23040 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:23168 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:23296 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 
12:07:33.515192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:23424 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:23552 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515221] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:23680 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:23808 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515258] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:23936 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515265] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:24064 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515280] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:24192 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.555 [2024-07-25 12:07:33.515295] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.555 [2024-07-25 12:07:33.515303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:24320 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-07-25 12:07:33.515310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-07-25 12:07:33.515318] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:24448 len:128 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:21:46.556 [2024-07-25 12:07:33.515324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:21:46.556 [2024-07-25 12:07:33.517202] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode6] resetting controller 00:21:46.556 [2024-07-25 12:07:33.517225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: 
*NOTICE*: [nqn.2016-06.io.spdk:cnode7] resetting controller 00:21:46.556 [2024-07-25 12:07:33.517236] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode8] resetting controller 00:21:46.556 [2024-07-25 12:07:33.517244] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode10] resetting controller 00:21:46.556 [2024-07-25 12:07:33.517252] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:21:46.556 [2024-07-25 12:07:33.517823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.556 [2024-07-25 12:07:33.517838] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfbdf30 with addr=10.0.0.2, port=4420 00:21:46.556 [2024-07-25 12:07:33.517846] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfbdf30 is same with the state(5) to be set 00:21:46.556 [2024-07-25 12:07:33.517854] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:46.556 [2024-07-25 12:07:33.517860] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:46.556 [2024-07-25 12:07:33.517867] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:46.556 [2024-07-25 12:07:33.517890] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.517901] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.517912] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.517922] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 
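The connect() failures above report errno = 111, which on Linux is ECONNREFUSED; that is consistent with this being the shutdown test (nvmf_shutdown_tc3), where the target side is being torn down while the bdev layer is still retrying its reconnects. A quick way to confirm the errno name, as a sketch only (assumes python3 is available on the build host; not part of the captured test output):

  python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
  # prints: ECONNREFUSED - Connection refused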
00:21:46.556 [2024-07-25 12:07:33.517946] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfbdf30 (9): Bad file descriptor
00:21:46.556 [2024-07-25 12:07:33.518037] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode4] resetting controller
00:21:46.556 [2024-07-25 12:07:33.518053] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode5] resetting controller
00:21:46.556 task offset: 21376 on job bdev=Nvme2n1 fails
00:21:46.556
00:21:46.556 Latency(us)
00:21:46.556 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:21:46.556 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme1n1 ended in about 0.92 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme1n1 : 0.92 209.59 13.10 69.86 0.00 226677.54 36016.31 209715.20
00:21:46.556 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme2n1 ended in about 0.89 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme2n1 : 0.89 143.98 9.00 71.99 0.00 288013.21 23365.01 308190.16
00:21:46.556 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme3n1 ended in about 0.91 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme3n1 : 0.91 141.12 8.82 70.56 0.00 288662.34 36244.26 299072.11
00:21:46.556 Job: Nvme4n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme4n1 ended in about 0.92 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme4n1 : 0.92 69.63 4.35 69.63 0.00 431405.86 37611.97 379310.97
00:21:46.556 Job: Nvme5n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme5n1 ended in about 0.92 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme5n1 : 0.92 208.27 13.02 69.42 0.00 212240.25 20059.71 196038.12
00:21:46.556 Job: Nvme6n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme6n1 ended in about 0.91 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme6n1 : 0.91 211.37 13.21 70.46 0.00 204953.60 22567.18 248011.02
00:21:46.556 Job: Nvme7n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme7n1 ended in about 0.91 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme7n1 : 0.91 281.45 17.59 70.36 0.00 160981.35 20287.67 222480.47
00:21:46.556 Job: Nvme8n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme8n1 ended in about 0.91 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme8n1 : 0.91 281.08 17.57 70.27 0.00 158044.34 20629.59 200597.15
00:21:46.556 Job: Nvme9n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme9n1 ended in about 0.92 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme9n1 : 0.92 138.42 8.65 69.21 0.00 262916.30 22567.18 268070.73
00:21:46.556 Job: Nvme10n1 (Core Mask 0x1, workload: verify, depth: 64, IO size: 65536)
00:21:46.556 Job: Nvme10n1 ended in about 0.91 seconds with error
00:21:46.556 Verification LBA range: start 0x0 length 0x400
00:21:46.556 Nvme10n1 : 0.91 138.16 8.63 70.17 0.00 256064.60 23478.98 319131.83
00:21:46.556 ===================================================================================================================
00:21:46.556 Total : 1823.07 113.94 701.94 0.00 231100.41 20059.71 379310.97
00:21:46.556 [2024-07-25 12:07:33.542079] app.c:1053:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:21:46.556 [2024-07-25 12:07:33.542118] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode9] resetting controller
00:21:46.556 [2024-07-25 12:07:33.542132] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed.
00:21:46.556 [2024-07-25 12:07:33.542744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.542762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1139700 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.542771] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1139700 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.543293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.543304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x1154bc0 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.543311] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1154bc0 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.543842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.543852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x114df60 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.543858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x114df60 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.544366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.544376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112aa70 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.544383] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112aa70 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.544887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.544897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf91c70 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.544904] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf91c70 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.546429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.546449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xfc1b90 with addr=10.0.0.2, port=4420
00:21:46.556 [2024-07-25 12:07:33.546457] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xfc1b90 is same with the state(5) to be set
00:21:46.556 [2024-07-25 12:07:33.546944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:21:46.556 [2024-07-25 12:07:33.546954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xae0340 with addr=10.0.0.2,
port=4420 00:21:46.556 [2024-07-25 12:07:33.546967] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xae0340 is same with the state(5) to be set 00:21:46.556 [2024-07-25 12:07:33.547461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.556 [2024-07-25 12:07:33.547472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x112b840 with addr=10.0.0.2, port=4420 00:21:46.556 [2024-07-25 12:07:33.547478] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x112b840 is same with the state(5) to be set 00:21:46.556 [2024-07-25 12:07:33.547492] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1139700 (9): Bad file descriptor 00:21:46.556 [2024-07-25 12:07:33.547504] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1154bc0 (9): Bad file descriptor 00:21:46.556 [2024-07-25 12:07:33.547512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x114df60 (9): Bad file descriptor 00:21:46.556 [2024-07-25 12:07:33.547520] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112aa70 (9): Bad file descriptor 00:21:46.556 [2024-07-25 12:07:33.547528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf91c70 (9): Bad file descriptor 00:21:46.556 [2024-07-25 12:07:33.547535] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode3] Ctrlr is in error state 00:21:46.556 [2024-07-25 12:07:33.547542] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode3] controller reinitialization failed 00:21:46.556 [2024-07-25 12:07:33.547550] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode3] in failed state. 00:21:46.556 [2024-07-25 12:07:33.547590] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.547603] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.547612] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.547621] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.556 [2024-07-25 12:07:33.547631] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.557 [2024-07-25 12:07:33.547640] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:21:46.557 [2024-07-25 12:07:33.547697] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.557 [2024-07-25 12:07:33.547709] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xfc1b90 (9): Bad file descriptor 00:21:46.557 [2024-07-25 12:07:33.547717] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xae0340 (9): Bad file descriptor 00:21:46.557 [2024-07-25 12:07:33.547726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x112b840 (9): Bad file descriptor 00:21:46.557 [2024-07-25 12:07:33.547734] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode6] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547739] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode6] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547745] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode6] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode7] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547758] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode7] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547765] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode7] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode8] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode8] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547789] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode8] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547798] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode10] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547803] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode10] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode10] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547818] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547824] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547829] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547882] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode2] resetting controller 00:21:46.557 [2024-07-25 12:07:33.547892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.547898] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.547902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.547908] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.547914] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:21:46.557 [2024-07-25 12:07:33.547925] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode4] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547932] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode4] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode4] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547946] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode5] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode5] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode5] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547966] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode9] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.547971] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode9] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.547977] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode9] in failed state. 00:21:46.557 [2024-07-25 12:07:33.547999] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.548006] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.548011] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:21:46.557 [2024-07-25 12:07:33.548561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:21:46.557 [2024-07-25 12:07:33.548573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x115d910 with addr=10.0.0.2, port=4420 00:21:46.557 [2024-07-25 12:07:33.548580] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x115d910 is same with the state(5) to be set 00:21:46.557 [2024-07-25 12:07:33.548607] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x115d910 (9): Bad file descriptor 00:21:46.557 [2024-07-25 12:07:33.548630] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode2] Ctrlr is in error state 00:21:46.557 [2024-07-25 12:07:33.548639] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode2] controller reinitialization failed 00:21:46.557 [2024-07-25 12:07:33.548645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode2] in failed state. 00:21:46.557 [2024-07-25 12:07:33.548669] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
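As a cross-check on the bdevperf summary above, the MiB/s column follows directly from the IOPS column and the 64 KiB IO size (65536 bytes = 1/16 MiB), so, for example, Nvme1n1's 209.59 IOPS works out to about 13.10 MiB/s, matching the reported value. A one-line sketch of the arithmetic (not part of the test output):

  awk 'BEGIN { printf "%.2f MiB/s\n", 209.59 * 65536 / 1048576 }'   # -> 13.10 MiB/s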
00:21:46.816 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@136 -- # nvmfpid= 00:21:46.816 12:07:33 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@139 -- # sleep 1 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # kill -9 391270 00:21:47.755 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/shutdown.sh: line 142: kill: (391270) - No such process 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@142 -- # true 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@144 -- # stoptarget 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@41 -- # rm -f ./local-job0-0-verify.state 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@42 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/bdevperf.conf 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@43 -- # rm -rf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/rpcs.txt 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- target/shutdown.sh@45 -- # nvmftestfini 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@117 -- # sync 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@120 -- # set +e 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:47.755 rmmod nvme_tcp 00:21:47.755 rmmod nvme_fabrics 00:21:47.755 rmmod nvme_keyring 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@124 -- # set -e 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@125 -- # return 0 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:47.755 12:07:34 
nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:47.755 12:07:34 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:21:50.293 00:21:50.293 real 0m7.824s 00:21:50.293 user 0m19.559s 00:21:50.293 sys 0m1.304s 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown.nvmf_shutdown_tc3 -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 ************************************ 00:21:50.293 END TEST nvmf_shutdown_tc3 00:21:50.293 ************************************ 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1142 -- # return 0 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- target/shutdown.sh@151 -- # trap - SIGINT SIGTERM EXIT 00:21:50.293 00:21:50.293 real 0m31.224s 00:21:50.293 user 1m18.851s 00:21:50.293 sys 0m8.349s 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra.nvmf_shutdown -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 ************************************ 00:21:50.293 END TEST nvmf_shutdown 00:21:50.293 ************************************ 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1142 -- # return 0 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra -- nvmf/nvmf_target_extra.sh@66 -- # trap - SIGINT SIGTERM EXIT 00:21:50.293 00:21:50.293 real 10m39.453s 00:21:50.293 user 23m53.933s 00:21:50.293 sys 2m57.689s 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@1124 -- # xtrace_disable 00:21:50.293 12:07:37 nvmf_tcp.nvmf_target_extra -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 ************************************ 00:21:50.293 END TEST nvmf_target_extra 00:21:50.293 ************************************ 00:21:50.293 12:07:37 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:21:50.293 12:07:37 nvmf_tcp -- nvmf/nvmf.sh@16 -- # run_test nvmf_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:50.293 12:07:37 nvmf_tcp -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:50.293 12:07:37 nvmf_tcp -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.293 12:07:37 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:21:50.293 ************************************ 00:21:50.293 START TEST nvmf_host 00:21:50.293 ************************************ 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/nvmf_host.sh --transport=tcp 00:21:50.293 * Looking for test storage... 
00:21:50.293 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # uname -s 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- paths/export.sh@5 -- # export PATH 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@47 -- # : 0 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@11 -- # trap 'exit 1' SIGINT SIGTERM EXIT 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@13 -- # TEST_ARGS=("$@") 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@15 -- # [[ 0 -eq 0 ]] 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@16 -- # run_test nvmf_multicontroller /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:21:50.293 12:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:21:50.294 ************************************ 00:21:50.294 START TEST nvmf_multicontroller 00:21:50.294 ************************************ 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multicontroller.sh --transport=tcp 00:21:50.294 * Looking for test storage... 
00:21:50.294 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # uname -s 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.294 12:07:37 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@5 -- # export PATH 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@47 -- # : 0 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@51 -- # have_pci_nics=0 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@11 -- # MALLOC_BDEV_SIZE=64 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@13 -- # NVMF_HOST_FIRST_PORT=60000 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
host/multicontroller.sh@14 -- # NVMF_HOST_SECOND_PORT=60001 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@18 -- # '[' tcp == rdma ']' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@23 -- # nvmftestinit 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@448 -- # prepare_net_devs 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@410 -- # local -g is_hw=no 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@412 -- # remove_spdk_ns 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@285 -- # xtrace_disable 00:21:50.294 12:07:37 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # pci_devs=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@291 -- # local -a pci_devs 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # pci_net_devs=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # pci_drivers=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@293 -- # local -A pci_drivers 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # net_devs=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@295 -- # local -ga net_devs 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # e810=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@296 -- # local -ga e810 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # x722=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@297 -- # local -ga x722 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # mlx=() 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@298 -- # local -ga mlx 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:21:55.571 12:07:42 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:21:55.571 Found 0000:86:00.0 (0x8086 - 0x159b) 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.571 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:21:55.572 Found 0000:86:00.1 (0x8086 - 0x159b) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@351 -- # [[ 
0x159b == \0\x\1\0\1\9 ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:21:55.572 Found net devices under 0000:86:00.0: cvl_0_0 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@390 -- # [[ up == up ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:21:55.572 Found net devices under 0000:86:00.1: cvl_0_1 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@414 -- # is_hw=yes 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:21:55.572 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:21:55.572 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.169 ms 00:21:55.572 00:21:55.572 --- 10.0.0.2 ping statistics --- 00:21:55.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.572 rtt min/avg/max/mdev = 0.169/0.169/0.169/0.000 ms 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:21:55.572 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:21:55.572 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:21:55.572 00:21:55.572 --- 10.0.0.1 ping statistics --- 00:21:55.572 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:21:55.572 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@422 -- # return 0 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@25 -- # nvmfappstart -m 0xE 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@722 -- # xtrace_disable 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@481 -- # nvmfpid=395356 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@482 -- # waitforlisten 395356 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 395356 ']' 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:21:55.572 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:55.572 12:07:42 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:55.572 [2024-07-25 12:07:42.701554] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:21:55.572 [2024-07-25 12:07:42.701599] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:21:55.572 EAL: No free 2048 kB hugepages reported on node 1 00:21:55.572 [2024-07-25 12:07:42.758825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:21:55.832 [2024-07-25 12:07:42.841087] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:21:55.832 [2024-07-25 12:07:42.841119] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:21:55.832 [2024-07-25 12:07:42.841126] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:21:55.832 [2024-07-25 12:07:42.841132] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:21:55.832 [2024-07-25 12:07:42.841137] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:21:55.832 [2024-07-25 12:07:42.841232] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:21:55.832 [2024-07-25 12:07:42.841336] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:21:55.832 [2024-07-25 12:07:42.841338] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@728 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@27 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 [2024-07-25 12:07:43.553570] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@29 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 Malloc0 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@30 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 
12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 [2024-07-25 12:07:43.614392] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 [2024-07-25 12:07:43.622363] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@36 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc1 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.401 Malloc1 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@37 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 -a -s SPDK00000000000002 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.401 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@38 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 Malloc1 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.661 12:07:43 
nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@40 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@41 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4421 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@44 -- # bdevperf_pid=395594 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@46 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; pap "$testdir/try.txt"; killprocess $bdevperf_pid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@47 -- # waitforlisten 395594 /var/tmp/bdevperf.sock 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@829 -- # '[' -z 395594 ']' 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@834 -- # local max_retries=100 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:21:56.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
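Condensed, the target-side setup traced above boils down to the RPC sequence below (a sketch that calls scripts/rpc.py directly instead of the suite's rpc_cmd wrapper; NQNs, serial numbers, addresses and sizes are copied from the trace):

  # TCP transport plus two malloc bdevs (64 MB, 512-byte blocks)
  scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
  scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1
  # two subsystems, each exposing one namespace on listeners 4420 and 4421
  for i in 1 2; do
    scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode$i -a -s SPDK0000000000000$i
    scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode$i Malloc$((i-1))
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4420
    scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode$i -t tcp -a 10.0.0.2 -s 4421
  done
  # bdevperf starts idle (-z) on its own RPC socket so controllers can be attached to it by RPC
  build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w write -t 1 -f &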
00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@838 -- # xtrace_disable 00:21:56.661 12:07:43 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.665 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@862 -- # return 0 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@50 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 NVMe0n1 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@54 -- # grep -c NVMe 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.666 1 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@60 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -q nqn.2021-09-7.io.spdk:00001 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 request: 00:21:57.666 { 00:21:57.666 "name": "NVMe0", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 
"trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.666 "hostnqn": "nqn.2021-09-7.io.spdk:00001", 00:21:57.666 "hostaddr": "10.0.0.2", 00:21:57.666 "hostsvcid": "60000", 00:21:57.666 "prchk_reftag": false, 00:21:57.666 "prchk_guard": false, 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false, 00:21:57.666 "method": "bdev_nvme_attach_controller", 00:21:57.666 "req_id": 1 00:21:57.666 } 00:21:57.666 Got JSON-RPC error response 00:21:57.666 response: 00:21:57.666 { 00:21:57.666 "code": -114, 00:21:57.666 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:57.666 } 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@65 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 request: 00:21:57.666 { 00:21:57.666 "name": "NVMe0", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:21:57.666 "hostaddr": "10.0.0.2", 00:21:57.666 "hostsvcid": "60000", 00:21:57.666 "prchk_reftag": false, 00:21:57.666 "prchk_guard": false, 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false, 00:21:57.666 "method": "bdev_nvme_attach_controller", 00:21:57.666 "req_id": 1 00:21:57.666 } 00:21:57.666 Got JSON-RPC error response 00:21:57.666 response: 00:21:57.666 { 00:21:57.666 "code": -114, 00:21:57.666 "message": "A controller named NVMe0 already exists with the specified network 
path\n" 00:21:57.666 } 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@69 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 request: 00:21:57.666 { 00:21:57.666 "name": "NVMe0", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.666 "adrfam": "ipv4", 00:21:57.666 "trsvcid": "4420", 00:21:57.666 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.666 "hostaddr": "10.0.0.2", 00:21:57.666 "hostsvcid": "60000", 00:21:57.666 "prchk_reftag": false, 00:21:57.666 "prchk_guard": false, 00:21:57.666 "hdgst": false, 00:21:57.666 "ddgst": false, 00:21:57.666 "multipath": "disable", 00:21:57.666 "method": "bdev_nvme_attach_controller", 00:21:57.666 "req_id": 1 00:21:57.666 } 00:21:57.666 Got JSON-RPC error response 00:21:57.666 response: 00:21:57.666 { 00:21:57.666 "code": -114, 00:21:57.666 "message": "A controller named NVMe0 already exists and multipath is disabled\n" 00:21:57.666 } 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- 
common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@74 -- # NOT rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@648 -- # local es=0 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 -x failover 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.666 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.666 request: 00:21:57.666 { 00:21:57.666 "name": "NVMe0", 00:21:57.666 "trtype": "tcp", 00:21:57.666 "traddr": "10.0.0.2", 00:21:57.667 "adrfam": "ipv4", 00:21:57.667 "trsvcid": "4420", 00:21:57.667 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:21:57.667 "hostaddr": "10.0.0.2", 00:21:57.667 "hostsvcid": "60000", 00:21:57.667 "prchk_reftag": false, 00:21:57.667 "prchk_guard": false, 00:21:57.667 "hdgst": false, 00:21:57.667 "ddgst": false, 00:21:57.667 "multipath": "failover", 00:21:57.667 "method": "bdev_nvme_attach_controller", 00:21:57.667 "req_id": 1 00:21:57.667 } 00:21:57.667 Got JSON-RPC error response 00:21:57.667 response: 00:21:57.667 { 00:21:57.667 "code": -114, 00:21:57.667 "message": "A controller named NVMe0 already exists with the specified network path\n" 00:21:57.667 } 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@651 -- # es=1 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@79 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.667 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.926 00:21:57.926 12:07:44 
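The four NOT-wrapped attach attempts above are the heart of the multicontroller test: once NVMe0 is bound to cnode1 at 10.0.0.2:4420, re-attaching under the same controller name with a different hostnqn, a different subsystem NQN, or with -x disable / -x failover against the same path is rejected with JSON-RPC error -114, whereas adding a second path on port 4421 for the same subsystem goes through. One failing and one succeeding call, with the same flags as in the trace:

  # rejected with -114: a controller named NVMe0 already exists for this network path
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode2 -i 10.0.0.2 -c 60000
  # accepted: a second path to cnode1 via port 4421 under the same controller name
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp \
      -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1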
nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@83 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@87 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe1 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -i 10.0.0.2 -c 60000 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.926 12:07:44 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.926 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # grep -c NVMe 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@90 -- # '[' 2 '!=' 2 ']' 00:21:57.926 12:07:45 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:21:59.306 0 00:21:59.306 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@98 -- # rpc_cmd -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1 00:21:59.306 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@100 -- # killprocess 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 395594 ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:21:59.307 
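Stepping back, the I/O phase traced above reduces to a controller-count check, the bdevperf run itself, and teardown (a sketch; the suite's killprocess helper is approximated here by a plain kill):

  # both controllers (NVMe0 and NVMe1) should be visible from bdevperf before I/O starts
  [ "$(scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -c NVMe)" -eq 2 ]
  # run the workload configured on the bdevperf command line (-q 128 -o 4096 -w write -t 1)
  examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests
  # teardown: drop the second controller, then stop bdevperf ("killprocess 395594" in the trace)
  scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe1
  kill "$bdevperf_pid"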
12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 'killing process with pid 395594' 00:21:59.307 killing process with pid 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 395594 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@102 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@103 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@559 -- # xtrace_disable 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@105 -- # trap - SIGINT SIGTERM EXIT 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@107 -- # pap /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # find /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt -type f 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1611 -- # sort -u 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1613 -- # cat 00:21:59.307 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:59.307 [2024-07-25 12:07:43.724997] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:21:59.307 [2024-07-25 12:07:43.725058] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid395594 ] 00:21:59.307 EAL: No free 2048 kB hugepages reported on node 1 00:21:59.307 [2024-07-25 12:07:43.779644] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:59.307 [2024-07-25 12:07:43.859405] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:21:59.307 [2024-07-25 12:07:45.041447] bdev.c:4633:bdev_name_add: *ERROR*: Bdev name 34448de3-f7b3-4565-a6e9-c624c2357075 already exists 00:21:59.307 [2024-07-25 12:07:45.041477] bdev.c:7755:bdev_register: *ERROR*: Unable to add uuid:34448de3-f7b3-4565-a6e9-c624c2357075 alias for bdev NVMe1n1 00:21:59.307 [2024-07-25 12:07:45.041485] bdev_nvme.c:4318:nvme_bdev_create: *ERROR*: spdk_bdev_register() failed 00:21:59.307 Running I/O for 1 seconds... 
00:21:59.307 00:21:59.307 Latency(us) 00:21:59.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.307 Job: NVMe0n1 (Core Mask 0x1, workload: write, depth: 128, IO size: 4096) 00:21:59.307 NVMe0n1 : 1.01 22506.35 87.92 0.00 0.00 5668.72 3405.02 28379.94 00:21:59.307 =================================================================================================================== 00:21:59.307 Total : 22506.35 87.92 0.00 0.00 5668.72 3405.02 28379.94 00:21:59.307 Received shutdown signal, test time was about 1.000000 seconds 00:21:59.307 00:21:59.307 Latency(us) 00:21:59.307 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:21:59.307 =================================================================================================================== 00:21:59.307 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:21:59.307 --- /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt --- 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1618 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1612 -- # read -r file 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- host/multicontroller.sh@108 -- # nvmftestfini 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@488 -- # nvmfcleanup 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@117 -- # sync 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@120 -- # set +e 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@121 -- # for i in {1..20} 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:21:59.307 rmmod nvme_tcp 00:21:59.307 rmmod nvme_fabrics 00:21:59.307 rmmod nvme_keyring 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@124 -- # set -e 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@125 -- # return 0 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@489 -- # '[' -n 395356 ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@490 -- # killprocess 395356 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@948 -- # '[' -z 395356 ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@952 -- # kill -0 395356 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # uname 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 395356 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@966 -- # echo 
'killing process with pid 395356' 00:21:59.307 killing process with pid 395356 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@967 -- # kill 395356 00:21:59.307 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@972 -- # wait 395356 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@278 -- # remove_spdk_ns 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:21:59.567 12:07:46 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:02.107 00:22:02.107 real 0m11.530s 00:22:02.107 user 0m16.240s 00:22:02.107 sys 0m4.714s 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host.nvmf_multicontroller -- common/autotest_common.sh@10 -- # set +x 00:22:02.107 ************************************ 00:22:02.107 END TEST nvmf_multicontroller 00:22:02.107 ************************************ 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@17 -- # run_test nvmf_aer /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:02.107 ************************************ 00:22:02.107 START TEST nvmf_aer 00:22:02.107 ************************************ 00:22:02.107 12:07:48 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/aer.sh --transport=tcp 00:22:02.107 * Looking for test storage... 
00:22:02.107 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # uname -s 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@5 -- # export PATH 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@47 -- # : 0 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@11 -- # nvmftestinit 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- 
# xtrace_disable_per_cmd _remove_spdk_ns 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@285 -- # xtrace_disable 00:22:02.107 12:07:49 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # pci_devs=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # net_devs=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # e810=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@296 -- # local -ga e810 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # x722=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@297 -- # local -ga x722 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # mlx=() 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@298 -- # local -ga mlx 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- 
nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:07.397 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:07.397 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:07.397 Found net devices under 0000:86:00.0: cvl_0_0 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:07.397 12:07:54 
nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:07.397 Found net devices under 0000:86:00.1: cvl_0_1 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:07.397 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@414 -- # is_hw=yes 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:07.398 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of 
data. 00:22:07.398 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.272 ms 00:22:07.398 00:22:07.398 --- 10.0.0.2 ping statistics --- 00:22:07.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.398 rtt min/avg/max/mdev = 0.272/0.272/0.272/0.000 ms 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:07.398 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:07.398 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.383 ms 00:22:07.398 00:22:07.398 --- 10.0.0.1 ping statistics --- 00:22:07.398 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:07.398 rtt min/avg/max/mdev = 0.383/0.383/0.383/0.000 ms 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@422 -- # return 0 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@12 -- # nvmfappstart -m 0xF 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@481 -- # nvmfpid=399584 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@482 -- # waitforlisten 399584 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@829 -- # '[' -z 399584 ']' 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:07.398 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:07.398 12:07:54 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:07.398 [2024-07-25 12:07:54.481561] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
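The network setup traced above is what makes the phy variant of this job work: the two E810 ports are split across namespaces, so cvl_0_0 (10.0.0.2) serves the target inside cvl_0_0_ns_spdk while cvl_0_1 (10.0.0.1) stays in the root namespace for the initiator, presumably looped back to the other port on the wire. Condensed from the trace:

  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  # the target runs inside the namespace; later target-side commands are prefixed with ip netns exec
  ip netns exec cvl_0_0_ns_spdk build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF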
00:22:07.398 [2024-07-25 12:07:54.481605] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:07.398 EAL: No free 2048 kB hugepages reported on node 1 00:22:07.398 [2024-07-25 12:07:54.536122] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:07.398 [2024-07-25 12:07:54.618234] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:07.398 [2024-07-25 12:07:54.618270] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:07.398 [2024-07-25 12:07:54.618278] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:07.398 [2024-07-25 12:07:54.618284] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:07.398 [2024-07-25 12:07:54.618289] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:07.398 [2024-07-25 12:07:54.618345] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:07.398 [2024-07-25 12:07:54.618364] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:07.398 [2024-07-25 12:07:54.618450] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:07.398 [2024-07-25 12:07:54.618451] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:08.337 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@862 -- # return 0 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@14 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 [2024-07-25 12:07:55.335515] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@16 -- # rpc_cmd bdev_malloc_create 64 512 --name Malloc0 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 Malloc0 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@17 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 12:07:55 
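The subsystem just created uses -m 2, capping it at two namespaces; the rest of aer.sh (traced below) attaches Malloc0, starts the host-side aer tool, and then hot-adds Malloc1 as namespace 2, which should surface as a namespace-attribute-changed asynchronous event on the host. A condensed sketch of that flow, with all arguments copied from the trace (reading the aer tool's -n/-t options as expected-namespace-count and ready-file is an assumption):

  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 2
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  # host side: connect and wait for asynchronous events
  test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &
  # hot-add a second namespace; the aer tool should observe the change
  scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2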
nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@18 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@19 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 [2024-07-25 12:07:55.387072] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@21 -- # rpc_cmd nvmf_get_subsystems 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.338 [ 00:22:08.338 { 00:22:08.338 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.338 "subtype": "Discovery", 00:22:08.338 "listen_addresses": [], 00:22:08.338 "allow_any_host": true, 00:22:08.338 "hosts": [] 00:22:08.338 }, 00:22:08.338 { 00:22:08.338 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.338 "subtype": "NVMe", 00:22:08.338 "listen_addresses": [ 00:22:08.338 { 00:22:08.338 "trtype": "TCP", 00:22:08.338 "adrfam": "IPv4", 00:22:08.338 "traddr": "10.0.0.2", 00:22:08.338 "trsvcid": "4420" 00:22:08.338 } 00:22:08.338 ], 00:22:08.338 "allow_any_host": true, 00:22:08.338 "hosts": [], 00:22:08.338 "serial_number": "SPDK00000000000001", 00:22:08.338 "model_number": "SPDK bdev Controller", 00:22:08.338 "max_namespaces": 2, 00:22:08.338 "min_cntlid": 1, 00:22:08.338 "max_cntlid": 65519, 00:22:08.338 "namespaces": [ 00:22:08.338 { 00:22:08.338 "nsid": 1, 00:22:08.338 "bdev_name": "Malloc0", 00:22:08.338 "name": "Malloc0", 00:22:08.338 "nguid": "E193C0CC5CC84CBA85A6BB265892A9D2", 00:22:08.338 "uuid": "e193c0cc-5cc8-4cba-85a6-bb265892a9d2" 00:22:08.338 } 00:22:08.338 ] 00:22:08.338 } 00:22:08.338 ] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@23 -- # AER_TOUCH_FILE=/tmp/aer_touch_file 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@24 -- # rm -f /tmp/aer_touch_file 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@33 -- # aerpid=399645 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvme/aer/aer -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -n 2 -t /tmp/aer_touch_file 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@36 -- # waitforfile /tmp/aer_touch_file 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1265 -- # local i=0 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' 
-e /tmp/aer_touch_file ']' 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 0 -lt 200 ']' 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=1 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:08.338 EAL: No free 2048 kB hugepages reported on node 1 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1267 -- # '[' 1 -lt 200 ']' 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1268 -- # i=2 00:22:08.338 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1269 -- # sleep 0.1 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1266 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1272 -- # '[' '!' -e /tmp/aer_touch_file ']' 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1276 -- # return 0 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@39 -- # rpc_cmd bdev_malloc_create 64 4096 --name Malloc1 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.598 Malloc1 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@40 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@41 -- # rpc_cmd nvmf_get_subsystems 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.598 [ 00:22:08.598 { 00:22:08.598 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:08.598 "subtype": "Discovery", 00:22:08.598 "listen_addresses": [], 00:22:08.598 "allow_any_host": true, 00:22:08.598 "hosts": [] 00:22:08.598 }, 00:22:08.598 { 00:22:08.598 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:08.598 "subtype": "NVMe", 00:22:08.598 "listen_addresses": [ 00:22:08.598 { 00:22:08.598 "trtype": "TCP", 00:22:08.598 "adrfam": "IPv4", 00:22:08.598 "traddr": "10.0.0.2", 00:22:08.598 "trsvcid": "4420" 00:22:08.598 } 00:22:08.598 ], 00:22:08.598 "allow_any_host": true, 00:22:08.598 "hosts": [], 00:22:08.598 "serial_number": "SPDK00000000000001", 00:22:08.598 "model_number": "SPDK bdev Controller", 00:22:08.598 "max_namespaces": 2, 00:22:08.598 "min_cntlid": 1, 00:22:08.598 "max_cntlid": 65519, 00:22:08.598 "namespaces": [ 00:22:08.598 { 00:22:08.598 "nsid": 1, 00:22:08.598 "bdev_name": "Malloc0", 00:22:08.598 "name": "Malloc0", 00:22:08.598 "nguid": "E193C0CC5CC84CBA85A6BB265892A9D2", 00:22:08.598 "uuid": "e193c0cc-5cc8-4cba-85a6-bb265892a9d2" 00:22:08.598 }, 00:22:08.598 { 00:22:08.598 "nsid": 2, 00:22:08.598 "bdev_name": "Malloc1", 00:22:08.598 "name": "Malloc1", 00:22:08.598 "nguid": 
"B20A0AFBE9194D16A2F25180D79B44A9", 00:22:08.598 "uuid": "b20a0afb-e919-4d16-a2f2-5180d79b44a9" 00:22:08.598 } 00:22:08.598 ] 00:22:08.598 } 00:22:08.598 ] 00:22:08.598 Asynchronous Event Request test 00:22:08.598 Attaching to 10.0.0.2 00:22:08.598 Attached to 10.0.0.2 00:22:08.598 Registering asynchronous event callbacks... 00:22:08.598 Starting namespace attribute notice tests for all controllers... 00:22:08.598 10.0.0.2: aer_cb for log page 4, aen_event_type: 0x02, aen_event_info: 0x00 00:22:08.598 aer_cb - Changed Namespace 00:22:08.598 Cleaning up... 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@43 -- # wait 399645 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@45 -- # rpc_cmd bdev_malloc_delete Malloc0 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@46 -- # rpc_cmd bdev_malloc_delete Malloc1 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.598 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@47 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@49 -- # trap - SIGINT SIGTERM EXIT 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- host/aer.sh@51 -- # nvmftestfini 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@117 -- # sync 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@120 -- # set +e 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:08.599 rmmod nvme_tcp 00:22:08.599 rmmod nvme_fabrics 00:22:08.599 rmmod nvme_keyring 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@124 -- # set -e 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@125 -- # return 0 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@489 -- # '[' -n 399584 ']' 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@490 -- # killprocess 399584 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@948 -- # '[' -z 399584 ']' 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@952 -- # kill -0 399584 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- 
common/autotest_common.sh@953 -- # uname 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:08.599 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 399584 00:22:08.858 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:08.858 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:08.858 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@966 -- # echo 'killing process with pid 399584' 00:22:08.858 killing process with pid 399584 00:22:08.858 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@967 -- # kill 399584 00:22:08.858 12:07:55 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@972 -- # wait 399584 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:08.858 12:07:56 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_aer -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:11.398 00:22:11.398 real 0m9.207s 00:22:11.398 user 0m7.303s 00:22:11.398 sys 0m4.481s 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_aer -- common/autotest_common.sh@10 -- # set +x 00:22:11.398 ************************************ 00:22:11.398 END TEST nvmf_aer 00:22:11.398 ************************************ 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@18 -- # run_test nvmf_async_init /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:11.398 ************************************ 00:22:11.398 START TEST nvmf_async_init 00:22:11.398 ************************************ 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/async_init.sh --transport=tcp 00:22:11.398 * Looking for test storage... 
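Before the next test starts, it helps to condense what the nvmf_aer run above actually did: build a subsystem capped at two namespaces, attach the aer test tool, then add a second namespace so the target emits a Changed Namespace asynchronous event. Reconstructed from the rpc_cmd calls in the trace (paths relative to the SPDK tree; this is a sketch of the flow, not the aer.sh script itself):

  ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
  ./scripts/rpc.py bdev_malloc_create 64 512 --name Malloc0
  ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 \
      -a -s SPDK00000000000001 -m 2            # allow any host, at most 2 namespaces
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

  # The aer tool connects, registers its AER callbacks, and touches the file
  # once it is armed; the script waits on that file before changing anything.
  ./test/nvme/aer/aer -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' \
      -n 2 -t /tmp/aer_touch_file &

  # Adding a second namespace is what triggers the event reported above as
  # "aer_cb - Changed Namespace".
  ./scripts/rpc.py bdev_malloc_create 64 4096 --name Malloc1
  ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 -n 2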
00:22:11.398 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # uname -s 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:11.398 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@5 -- # export PATH 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@47 -- # : 0 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@13 -- # null_bdev_size=1024 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@14 -- # null_block_size=512 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@15 -- # null_bdev=null0 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@16 -- # nvme_bdev=nvme0 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # uuidgen 00:22:11.399 12:07:58 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # tr -d - 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@20 -- # nguid=f2660ca81932477ca8d7d319e7821593 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@22 -- # nvmftestinit 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@285 -- # xtrace_disable 00:22:11.399 12:07:58 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # pci_devs=() 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:16.675 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # net_devs=() 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # e810=() 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@296 -- # local -ga e810 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # x722=() 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@297 -- # local -ga x722 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # mlx=() 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@298 -- # local -ga mlx 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- 
nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:16.676 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:16.676 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 
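The stretch just above and below is the harness working out which NICs it may use: it keeps the Intel E810 functions it finds (device ID 0x159b on this host) and then resolves each PCI function to its kernel netdev via sysfs, as the "Found net devices under ..." lines show. Stripped of the xtrace noise, the core of that discovery is roughly the following simplified sketch (it assumes operstate is the "up" check; the real common.sh handles more device families and error cases):

  pci_devs=("0000:86:00.0" "0000:86:00.1")      # the two E810 functions found above
  net_devs=()
  for pci in "${pci_devs[@]}"; do
      # Each PCI network function exposes its netdev name(s) under sysfs.
      for path in /sys/bus/pci/devices/"$pci"/net/*; do
          dev=${path##*/}
          [[ $(cat /sys/class/net/"$dev"/operstate) == up ]] && net_devs+=("$dev")
      done
  done
  # With two usable ports, the first becomes the target-side interface and the
  # second the initiator side (cvl_0_0 and cvl_0_1 in this run).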
00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:16.676 Found net devices under 0000:86:00.0: cvl_0_0 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:16.676 Found net devices under 0000:86:00.1: cvl_0_1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@414 -- # is_hw=yes 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@242 -- 
# NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:16.676 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:16.676 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:22:16.676 00:22:16.676 --- 10.0.0.2 ping statistics --- 00:22:16.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.676 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:16.676 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:16.676 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.395 ms 00:22:16.676 00:22:16.676 --- 10.0.0.1 ping statistics --- 00:22:16.676 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:16.676 rtt min/avg/max/mdev = 0.395/0.395/0.395/0.000 ms 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@422 -- # return 0 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@23 -- # nvmfappstart -m 0x1 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@481 -- # nvmfpid=403133 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@482 -- # waitforlisten 403133 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@829 -- # '[' -z 403133 ']' 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:16.676 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:16.677 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:16.677 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:16.677 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:16.677 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:16.677 12:08:03 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x1 00:22:16.677 [2024-07-25 12:08:03.760121] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:22:16.677 [2024-07-25 12:08:03.760164] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:16.677 EAL: No free 2048 kB hugepages reported on node 1 00:22:16.677 [2024-07-25 12:08:03.816864] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:22:16.677 [2024-07-25 12:08:03.895493] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:16.677 [2024-07-25 12:08:03.895525] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:16.677 [2024-07-25 12:08:03.895533] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:16.677 [2024-07-25 12:08:03.895539] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:16.677 [2024-07-25 12:08:03.895545] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:16.677 [2024-07-25 12:08:03.895566] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@862 -- # return 0 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@26 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 [2024-07-25 12:08:04.582518] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@27 -- # rpc_cmd bdev_null_create null0 1024 512 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 null0 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@28 -- # rpc_cmd bdev_wait_for_examine 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@29 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 -a 00:22:17.614 12:08:04 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@30 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 -g f2660ca81932477ca8d7d319e7821593 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@31 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 [2024-07-25 12:08:04.622697] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@37 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.614 nvme0n1 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@41 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.614 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.873 [ 00:22:17.873 { 00:22:17.873 "name": "nvme0n1", 00:22:17.873 "aliases": [ 00:22:17.873 "f2660ca8-1932-477c-a8d7-d319e7821593" 00:22:17.873 ], 00:22:17.873 "product_name": "NVMe disk", 00:22:17.873 "block_size": 512, 00:22:17.873 "num_blocks": 2097152, 00:22:17.873 "uuid": "f2660ca8-1932-477c-a8d7-d319e7821593", 00:22:17.873 "assigned_rate_limits": { 00:22:17.873 "rw_ios_per_sec": 0, 00:22:17.873 "rw_mbytes_per_sec": 0, 00:22:17.873 "r_mbytes_per_sec": 0, 00:22:17.873 "w_mbytes_per_sec": 0 00:22:17.873 }, 00:22:17.873 "claimed": false, 00:22:17.873 "zoned": false, 00:22:17.873 "supported_io_types": { 00:22:17.873 "read": true, 00:22:17.873 "write": true, 00:22:17.873 "unmap": false, 00:22:17.873 "flush": true, 00:22:17.873 "reset": true, 00:22:17.873 "nvme_admin": true, 00:22:17.873 "nvme_io": true, 00:22:17.873 "nvme_io_md": false, 00:22:17.873 "write_zeroes": true, 00:22:17.873 "zcopy": false, 00:22:17.873 "get_zone_info": false, 00:22:17.873 "zone_management": false, 00:22:17.873 "zone_append": false, 00:22:17.873 "compare": true, 00:22:17.873 "compare_and_write": true, 00:22:17.873 "abort": true, 00:22:17.873 "seek_hole": false, 00:22:17.873 "seek_data": false, 00:22:17.873 "copy": true, 00:22:17.874 "nvme_iov_md": 
false 00:22:17.874 }, 00:22:17.874 "memory_domains": [ 00:22:17.874 { 00:22:17.874 "dma_device_id": "system", 00:22:17.874 "dma_device_type": 1 00:22:17.874 } 00:22:17.874 ], 00:22:17.874 "driver_specific": { 00:22:17.874 "nvme": [ 00:22:17.874 { 00:22:17.874 "trid": { 00:22:17.874 "trtype": "TCP", 00:22:17.874 "adrfam": "IPv4", 00:22:17.874 "traddr": "10.0.0.2", 00:22:17.874 "trsvcid": "4420", 00:22:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:17.874 }, 00:22:17.874 "ctrlr_data": { 00:22:17.874 "cntlid": 1, 00:22:17.874 "vendor_id": "0x8086", 00:22:17.874 "model_number": "SPDK bdev Controller", 00:22:17.874 "serial_number": "00000000000000000000", 00:22:17.874 "firmware_revision": "24.09", 00:22:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.874 "oacs": { 00:22:17.874 "security": 0, 00:22:17.874 "format": 0, 00:22:17.874 "firmware": 0, 00:22:17.874 "ns_manage": 0 00:22:17.874 }, 00:22:17.874 "multi_ctrlr": true, 00:22:17.874 "ana_reporting": false 00:22:17.874 }, 00:22:17.874 "vs": { 00:22:17.874 "nvme_version": "1.3" 00:22:17.874 }, 00:22:17.874 "ns_data": { 00:22:17.874 "id": 1, 00:22:17.874 "can_share": true 00:22:17.874 } 00:22:17.874 } 00:22:17.874 ], 00:22:17.874 "mp_policy": "active_passive" 00:22:17.874 } 00:22:17.874 } 00:22:17.874 ] 00:22:17.874 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.874 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@44 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:22:17.874 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.874 12:08:04 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.874 [2024-07-25 12:08:04.871233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:22:17.874 [2024-07-25 12:08:04.871316] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x26c3390 (9): Bad file descriptor 00:22:17.874 [2024-07-25 12:08:05.003132] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
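The reset sequence that just completed is the heart of the async_init bdev checks: attach a controller over TCP, reset it, and confirm the namespace bdev is still usable afterwards. The cntlid in the two bdev_get_bdevs dumps (1 before, 2 after) shows that the target handed out a fresh controller on reconnect. A condensed sketch of that check follows; jq is used here only for illustration and is not part of the original script:

  ./scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 \
      -f ipv4 -s 4420 -n nqn.2016-06.io.spdk:cnode0
  before=$(./scripts/rpc.py bdev_get_bdevs -b nvme0n1 |
      jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')      # 1 in this run

  # Triggers a disconnect/reconnect of the controller; the target allocates a
  # fresh controller, so the host sees a new cntlid afterwards.
  ./scripts/rpc.py bdev_nvme_reset_controller nvme0

  after=$(./scripts/rpc.py bdev_get_bdevs -b nvme0n1 |
      jq '.[0].driver_specific.nvme[0].ctrlr_data.cntlid')      # 2 in this run
  [[ $after -ne $before ]] && echo "controller re-established after reset"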
00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@47 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.874 [ 00:22:17.874 { 00:22:17.874 "name": "nvme0n1", 00:22:17.874 "aliases": [ 00:22:17.874 "f2660ca8-1932-477c-a8d7-d319e7821593" 00:22:17.874 ], 00:22:17.874 "product_name": "NVMe disk", 00:22:17.874 "block_size": 512, 00:22:17.874 "num_blocks": 2097152, 00:22:17.874 "uuid": "f2660ca8-1932-477c-a8d7-d319e7821593", 00:22:17.874 "assigned_rate_limits": { 00:22:17.874 "rw_ios_per_sec": 0, 00:22:17.874 "rw_mbytes_per_sec": 0, 00:22:17.874 "r_mbytes_per_sec": 0, 00:22:17.874 "w_mbytes_per_sec": 0 00:22:17.874 }, 00:22:17.874 "claimed": false, 00:22:17.874 "zoned": false, 00:22:17.874 "supported_io_types": { 00:22:17.874 "read": true, 00:22:17.874 "write": true, 00:22:17.874 "unmap": false, 00:22:17.874 "flush": true, 00:22:17.874 "reset": true, 00:22:17.874 "nvme_admin": true, 00:22:17.874 "nvme_io": true, 00:22:17.874 "nvme_io_md": false, 00:22:17.874 "write_zeroes": true, 00:22:17.874 "zcopy": false, 00:22:17.874 "get_zone_info": false, 00:22:17.874 "zone_management": false, 00:22:17.874 "zone_append": false, 00:22:17.874 "compare": true, 00:22:17.874 "compare_and_write": true, 00:22:17.874 "abort": true, 00:22:17.874 "seek_hole": false, 00:22:17.874 "seek_data": false, 00:22:17.874 "copy": true, 00:22:17.874 "nvme_iov_md": false 00:22:17.874 }, 00:22:17.874 "memory_domains": [ 00:22:17.874 { 00:22:17.874 "dma_device_id": "system", 00:22:17.874 "dma_device_type": 1 00:22:17.874 } 00:22:17.874 ], 00:22:17.874 "driver_specific": { 00:22:17.874 "nvme": [ 00:22:17.874 { 00:22:17.874 "trid": { 00:22:17.874 "trtype": "TCP", 00:22:17.874 "adrfam": "IPv4", 00:22:17.874 "traddr": "10.0.0.2", 00:22:17.874 "trsvcid": "4420", 00:22:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:17.874 }, 00:22:17.874 "ctrlr_data": { 00:22:17.874 "cntlid": 2, 00:22:17.874 "vendor_id": "0x8086", 00:22:17.874 "model_number": "SPDK bdev Controller", 00:22:17.874 "serial_number": "00000000000000000000", 00:22:17.874 "firmware_revision": "24.09", 00:22:17.874 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:17.874 "oacs": { 00:22:17.874 "security": 0, 00:22:17.874 "format": 0, 00:22:17.874 "firmware": 0, 00:22:17.874 "ns_manage": 0 00:22:17.874 }, 00:22:17.874 "multi_ctrlr": true, 00:22:17.874 "ana_reporting": false 00:22:17.874 }, 00:22:17.874 "vs": { 00:22:17.874 "nvme_version": "1.3" 00:22:17.874 }, 00:22:17.874 "ns_data": { 00:22:17.874 "id": 1, 00:22:17.874 "can_share": true 00:22:17.874 } 00:22:17.874 } 00:22:17.874 ], 00:22:17.874 "mp_policy": "active_passive" 00:22:17.874 } 00:22:17.874 } 00:22:17.874 ] 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@50 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.874 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.875 12:08:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # mktemp 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@53 -- # key_path=/tmp/tmp.K0AVOg3oxJ 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@54 -- # echo -n NVMeTLSkey-1:01:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@55 -- # chmod 0600 /tmp/tmp.K0AVOg3oxJ 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@56 -- # rpc_cmd nvmf_subsystem_allow_any_host nqn.2016-06.io.spdk:cnode0 --disable 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@57 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 --secure-channel 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.875 [2024-07-25 12:08:05.051804] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:22:17.875 [2024-07-25 12:08:05.051900] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@59 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.K0AVOg3oxJ 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.875 [2024-07-25 12:08:05.059818] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@65 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -f ipv4 -s 4421 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host1 --psk /tmp/tmp.K0AVOg3oxJ 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:17.875 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:17.875 [2024-07-25 12:08:05.067852] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:22:17.875 [2024-07-25 12:08:05.067886] nvme_tcp.c:2589:nvme_tcp_generate_tls_credentials: *WARNING*: nvme_ctrlr_psk: deprecated feature spdk_nvme_ctrlr_opts.psk to be removed in v24.09 00:22:18.134 nvme0n1 00:22:18.134 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.134 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@69 -- # rpc_cmd bdev_get_bdevs -b nvme0n1 00:22:18.134 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 
00:22:18.134 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:18.134 [ 00:22:18.134 { 00:22:18.134 "name": "nvme0n1", 00:22:18.134 "aliases": [ 00:22:18.134 "f2660ca8-1932-477c-a8d7-d319e7821593" 00:22:18.134 ], 00:22:18.135 "product_name": "NVMe disk", 00:22:18.135 "block_size": 512, 00:22:18.135 "num_blocks": 2097152, 00:22:18.135 "uuid": "f2660ca8-1932-477c-a8d7-d319e7821593", 00:22:18.135 "assigned_rate_limits": { 00:22:18.135 "rw_ios_per_sec": 0, 00:22:18.135 "rw_mbytes_per_sec": 0, 00:22:18.135 "r_mbytes_per_sec": 0, 00:22:18.135 "w_mbytes_per_sec": 0 00:22:18.135 }, 00:22:18.135 "claimed": false, 00:22:18.135 "zoned": false, 00:22:18.135 "supported_io_types": { 00:22:18.135 "read": true, 00:22:18.135 "write": true, 00:22:18.135 "unmap": false, 00:22:18.135 "flush": true, 00:22:18.135 "reset": true, 00:22:18.135 "nvme_admin": true, 00:22:18.135 "nvme_io": true, 00:22:18.135 "nvme_io_md": false, 00:22:18.135 "write_zeroes": true, 00:22:18.135 "zcopy": false, 00:22:18.135 "get_zone_info": false, 00:22:18.135 "zone_management": false, 00:22:18.135 "zone_append": false, 00:22:18.135 "compare": true, 00:22:18.135 "compare_and_write": true, 00:22:18.135 "abort": true, 00:22:18.135 "seek_hole": false, 00:22:18.135 "seek_data": false, 00:22:18.135 "copy": true, 00:22:18.135 "nvme_iov_md": false 00:22:18.135 }, 00:22:18.135 "memory_domains": [ 00:22:18.135 { 00:22:18.135 "dma_device_id": "system", 00:22:18.135 "dma_device_type": 1 00:22:18.135 } 00:22:18.135 ], 00:22:18.135 "driver_specific": { 00:22:18.135 "nvme": [ 00:22:18.135 { 00:22:18.135 "trid": { 00:22:18.135 "trtype": "TCP", 00:22:18.135 "adrfam": "IPv4", 00:22:18.135 "traddr": "10.0.0.2", 00:22:18.135 "trsvcid": "4421", 00:22:18.135 "subnqn": "nqn.2016-06.io.spdk:cnode0" 00:22:18.135 }, 00:22:18.135 "ctrlr_data": { 00:22:18.135 "cntlid": 3, 00:22:18.135 "vendor_id": "0x8086", 00:22:18.135 "model_number": "SPDK bdev Controller", 00:22:18.135 "serial_number": "00000000000000000000", 00:22:18.135 "firmware_revision": "24.09", 00:22:18.135 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:22:18.135 "oacs": { 00:22:18.135 "security": 0, 00:22:18.135 "format": 0, 00:22:18.135 "firmware": 0, 00:22:18.135 "ns_manage": 0 00:22:18.135 }, 00:22:18.135 "multi_ctrlr": true, 00:22:18.135 "ana_reporting": false 00:22:18.135 }, 00:22:18.135 "vs": { 00:22:18.135 "nvme_version": "1.3" 00:22:18.135 }, 00:22:18.135 "ns_data": { 00:22:18.135 "id": 1, 00:22:18.135 "can_share": true 00:22:18.135 } 00:22:18.135 } 00:22:18.135 ], 00:22:18.135 "mp_policy": "active_passive" 00:22:18.135 } 00:22:18.135 } 00:22:18.135 ] 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@72 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@75 -- # rm -f /tmp/tmp.K0AVOg3oxJ 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@77 -- # trap - SIGINT SIGTERM EXIT 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- host/async_init.sh@78 -- # nvmftestfini 00:22:18.135 12:08:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@117 -- # sync 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@120 -- # set +e 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:18.135 rmmod nvme_tcp 00:22:18.135 rmmod nvme_fabrics 00:22:18.135 rmmod nvme_keyring 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@124 -- # set -e 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@125 -- # return 0 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@489 -- # '[' -n 403133 ']' 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@490 -- # killprocess 403133 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@948 -- # '[' -z 403133 ']' 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@952 -- # kill -0 403133 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # uname 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 403133 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@966 -- # echo 'killing process with pid 403133' 00:22:18.135 killing process with pid 403133 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@967 -- # kill 403133 00:22:18.135 [2024-07-25 12:08:05.254185] app.c:1024:log_deprecation_hits: *WARNING*: nvme_ctrlr_psk: deprecation 'spdk_nvme_ctrlr_opts.psk' scheduled for removal in v24.09 hit 1 times 00:22:18.135 [2024-07-25 12:08:05.254207] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:22:18.135 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@972 -- # wait 403133 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:18.395 12:08:05 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:18.395 12:08:05 
nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host.nvmf_async_init -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:20.331 00:22:20.331 real 0m9.291s 00:22:20.331 user 0m3.324s 00:22:20.331 sys 0m4.416s 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host.nvmf_async_init -- common/autotest_common.sh@10 -- # set +x 00:22:20.331 ************************************ 00:22:20.331 END TEST nvmf_async_init 00:22:20.331 ************************************ 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@19 -- # run_test dma /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.331 ************************************ 00:22:20.331 START TEST dma 00:22:20.331 ************************************ 00:22:20.331 12:08:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/dma.sh --transport=tcp 00:22:20.591 * Looking for test storage... 00:22:20.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # uname -s 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@45 -- # 
source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@5 -- # export PATH 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@47 -- # : 0 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- 
nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@12 -- # '[' tcp '!=' rdma ']' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- host/dma.sh@13 -- # exit 0 00:22:20.591 00:22:20.591 real 0m0.108s 00:22:20.591 user 0m0.055s 00:22:20.591 sys 0m0.060s 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.dma -- common/autotest_common.sh@10 -- # set +x 00:22:20.591 ************************************ 00:22:20.591 END TEST dma 00:22:20.591 ************************************ 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@22 -- # run_test nvmf_identify /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:20.591 ************************************ 00:22:20.591 START TEST nvmf_identify 00:22:20.591 ************************************ 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify.sh --transport=tcp 00:22:20.591 * Looking for test storage... 
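The dma suite above completes in a fraction of a second because host/dma.sh only exercises the DMA path on RDMA transports; with --transport=tcp the guard traced at host/dma.sh@12-13 returns success before any target is started. A minimal sketch of that guard, reconstructed from the expanded xtrace (the variable name is illustrative; the script itself is not reproduced in this log):

    # host/dma.sh guard as suggested by the trace: skip the suite unless the transport is rdma
    if [ "$TEST_TRANSPORT" != "rdma" ]; then
        exit 0
    fi
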
00:22:20.591 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # uname -s 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:20.591 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@5 -- # export PATH 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@47 -- # : 0 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@11 -- # MALLOC_BDEV_SIZE=64 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@14 -- # nvmftestinit 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@448 -- # prepare_net_devs 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@285 -- # xtrace_disable 00:22:20.592 12:08:07 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # pci_devs=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # net_devs=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # e810=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@296 -- # local -ga e810 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # x722=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@297 -- # local -ga x722 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # mlx=() 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@298 -- # local -ga mlx 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:25.874 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- 
nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:25.875 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:25.875 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.875 12:08:12 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:25.875 Found net devices under 0000:86:00.0: cvl_0_0 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:25.875 Found net devices under 0000:86:00.1: cvl_0_1 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@414 -- # is_hw=yes 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:25.875 12:08:12 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:25.875 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@254 -- # ip addr add 
10.0.0.1/24 dev cvl_0_1 00:22:25.875 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:25.875 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:25.875 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:26.135 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:26.135 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.276 ms 00:22:26.135 00:22:26.135 --- 10.0.0.2 ping statistics --- 00:22:26.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.135 rtt min/avg/max/mdev = 0.276/0.276/0.276/0.000 ms 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:26.135 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:26.135 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:22:26.135 00:22:26.135 --- 10.0.0.1 ping statistics --- 00:22:26.135 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:26.135 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@422 -- # return 0 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@16 -- # timing_enter start_nvmf_tgt 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@18 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@19 -- # nvmfpid=406936 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@21 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@23 -- # waitforlisten 406936 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@829 -- # '[' -z 406936 ']' 00:22:26.135 12:08:13 
nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:26.135 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:26.135 12:08:13 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.135 [2024-07-25 12:08:13.253456] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:22:26.135 [2024-07-25 12:08:13.253496] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:26.135 EAL: No free 2048 kB hugepages reported on node 1 00:22:26.135 [2024-07-25 12:08:13.307527] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:26.394 [2024-07-25 12:08:13.388844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:26.394 [2024-07-25 12:08:13.388880] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:26.394 [2024-07-25 12:08:13.388888] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:26.394 [2024-07-25 12:08:13.388895] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:26.394 [2024-07-25 12:08:13.388901] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
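At this point nvmf_tgt (pid 406936) is running inside the cvl_0_0_ns_spdk namespace and listening on /var/tmp/spdk.sock, and identify.sh configures it through rpc_cmd, which forwards its arguments to scripts/rpc.py. The same target state can be reproduced by hand with roughly the sequence below (a sketch assuming the default RPC socket and the namespace/IP layout set up above; paths are relative to the SPDK repository root):

    # start the target in the test namespace, then build the subsystem the identify pass will probe
    ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF &
    # (wait for the RPC socket to appear before issuing commands)
    ./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
    ./scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0
    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_get_subsystems

The closing nvmf_get_subsystems call is what yields the JSON listing of the discovery subsystem and nqn.2016-06.io.spdk:cnode1 seen a little further down.
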
00:22:26.394 [2024-07-25 12:08:13.388949] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:26.394 [2024-07-25 12:08:13.388967] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:26.394 [2024-07-25 12:08:13.389081] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:26.394 [2024-07-25 12:08:13.389083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@862 -- # return 0 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@24 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 [2024-07-25 12:08:14.088238] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@25 -- # timing_exit start_nvmf_tgt 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@27 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 Malloc0 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@28 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@31 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 --nguid ABCDEF0123456789ABCDEF0123456789 --eui64 ABCDEF0123456789 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@34 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 [2024-07-25 12:08:14.176419] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@35 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@37 -- # rpc_cmd nvmf_get_subsystems 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:26.962 [ 00:22:26.962 { 00:22:26.962 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:22:26.962 "subtype": "Discovery", 00:22:26.962 "listen_addresses": [ 00:22:26.962 { 00:22:26.962 "trtype": "TCP", 00:22:26.962 "adrfam": "IPv4", 00:22:26.962 "traddr": "10.0.0.2", 00:22:26.962 "trsvcid": "4420" 00:22:26.962 } 00:22:26.962 ], 00:22:26.962 "allow_any_host": true, 00:22:26.962 "hosts": [] 00:22:26.962 }, 00:22:26.962 { 00:22:26.962 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:22:26.962 "subtype": "NVMe", 00:22:26.962 "listen_addresses": [ 00:22:26.962 { 00:22:26.962 "trtype": "TCP", 00:22:26.962 "adrfam": "IPv4", 00:22:26.962 "traddr": "10.0.0.2", 00:22:26.962 "trsvcid": "4420" 00:22:26.962 } 00:22:26.962 ], 00:22:26.962 "allow_any_host": true, 00:22:26.962 "hosts": [], 00:22:26.962 "serial_number": "SPDK00000000000001", 00:22:26.962 "model_number": "SPDK bdev Controller", 00:22:26.962 "max_namespaces": 32, 00:22:26.962 "min_cntlid": 1, 00:22:26.962 "max_cntlid": 65519, 00:22:26.962 "namespaces": [ 00:22:26.962 { 00:22:26.962 "nsid": 1, 00:22:26.962 "bdev_name": "Malloc0", 00:22:26.962 "name": "Malloc0", 00:22:26.962 "nguid": "ABCDEF0123456789ABCDEF0123456789", 00:22:26.962 "eui64": "ABCDEF0123456789", 00:22:26.962 "uuid": "7d1cf4d7-83b5-4e1c-9709-5ba9d8dd30fb" 00:22:26.962 } 00:22:26.962 ] 00:22:26.962 } 00:22:26.962 ] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:26.962 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' -L all 00:22:27.224 [2024-07-25 12:08:14.226587] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
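The identify pass itself is a single spdk_nvme_identify run against the discovery subsystem, with -L all enabling the per-component debug logging that fills the rest of this test. A hand-run equivalent (sketch, reusing the transport ID from the trace) would be:

    # query the discovery subsystem over TCP and dump identify data with all debug flags on
    ./build/bin/spdk_nvme_identify \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' \
        -L all

The DEBUG lines that follow trace controller initialization for that discovery controller: connect the admin queue (ICReq, then FABRIC CONNECT returning CNTLID 0x0001), read VS and CAP, check CC/CSTS (here CC.EN = 0 and CSTS.RDY = 0, so the controller is already disabled), write CC.EN = 1 and wait for CSTS.RDY = 1, then issue IDENTIFY, configure AER, and set the keep-alive timeout.
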
00:22:27.224 [2024-07-25 12:08:14.226621] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407125 ] 00:22:27.224 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.224 [2024-07-25 12:08:14.256522] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to connect adminq (no timeout) 00:22:27.224 [2024-07-25 12:08:14.256569] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:27.224 [2024-07-25 12:08:14.256574] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:27.224 [2024-07-25 12:08:14.256584] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:27.224 [2024-07-25 12:08:14.256591] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:27.224 [2024-07-25 12:08:14.257241] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for connect adminq (no timeout) 00:22:27.224 [2024-07-25 12:08:14.257267] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0x1fd2ec0 0 00:22:27.224 [2024-07-25 12:08:14.272051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:27.224 [2024-07-25 12:08:14.272070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:27.224 [2024-07-25 12:08:14.272075] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:27.224 [2024-07-25 12:08:14.272078] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:27.224 [2024-07-25 12:08:14.272114] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.224 [2024-07-25 12:08:14.272120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.224 [2024-07-25 12:08:14.272124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.224 [2024-07-25 12:08:14.272134] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:27.224 [2024-07-25 12:08:14.272150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.224 [2024-07-25 12:08:14.280055] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.224 [2024-07-25 12:08:14.280064] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.224 [2024-07-25 12:08:14.280067] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.224 [2024-07-25 12:08:14.280071] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.224 [2024-07-25 12:08:14.280081] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:27.224 [2024-07-25 12:08:14.280087] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs (no timeout) 00:22:27.224 [2024-07-25 12:08:14.280092] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read vs wait for vs (no timeout) 00:22:27.224 [2024-07-25 12:08:14.280106] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.224 [2024-07-25 12:08:14.280110] nvme_tcp.c: 
967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.224 [2024-07-25 12:08:14.280113] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.224 [2024-07-25 12:08:14.280119] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.224 [2024-07-25 12:08:14.280132] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.224 [2024-07-25 12:08:14.280370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.280381] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.280389] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280392] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.280400] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap (no timeout) 00:22:27.225 [2024-07-25 12:08:14.280409] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to read cap wait for cap (no timeout) 00:22:27.225 [2024-07-25 12:08:14.280416] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280420] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280423] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.280431] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.280444] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.280615] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.280625] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.280628] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.280637] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en (no timeout) 00:22:27.225 [2024-07-25 12:08:14.280646] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to check en wait for cc (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.280653] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280657] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280660] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.280667] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.280679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.280859] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 
[2024-07-25 12:08:14.280869] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.280872] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280876] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.280881] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.280892] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280896] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.280900] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.280906] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.280918] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.281116] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.281127] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.281130] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281134] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.281142] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 0 && CSTS.RDY = 0 00:22:27.225 [2024-07-25 12:08:14.281147] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to controller is disabled (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.281155] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.281260] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Setting CC.EN = 1 00:22:27.225 [2024-07-25 12:08:14.281265] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.281273] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281277] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281280] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.281286] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.281299] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.281459] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.281469] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.281472] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: 
*DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281476] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.281481] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:27.225 [2024-07-25 12:08:14.281491] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281495] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281498] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.281505] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.281517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.281705] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.281715] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.281718] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281722] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.281726] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:27.225 [2024-07-25 12:08:14.281731] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to reset admin queue (timeout 30000 ms) 00:22:27.225 [2024-07-25 12:08:14.281740] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to identify controller (no timeout) 00:22:27.225 [2024-07-25 12:08:14.281748] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for identify controller (timeout 30000 ms) 00:22:27.225 [2024-07-25 12:08:14.281758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281761] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.225 [2024-07-25 12:08:14.281769] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.225 [2024-07-25 12:08:14.281784] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.225 [2024-07-25 12:08:14.281970] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.225 [2024-07-25 12:08:14.281981] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.225 [2024-07-25 12:08:14.281984] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.281988] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2ec0): datao=0, datal=4096, cccid=0 00:22:27.225 [2024-07-25 12:08:14.281992] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2055e40) on tqpair(0x1fd2ec0): expected_datao=0, payload_size=4096 00:22:27.225 [2024-07-25 12:08:14.281996] nvme_tcp.c: 790:nvme_tcp_build_contig_request: 
*DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.282003] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.282007] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.282121] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.225 [2024-07-25 12:08:14.282131] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.225 [2024-07-25 12:08:14.282134] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.225 [2024-07-25 12:08:14.282138] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.225 [2024-07-25 12:08:14.282145] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_xfer_size 4294967295 00:22:27.225 [2024-07-25 12:08:14.282149] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] MDTS max_xfer_size 131072 00:22:27.225 [2024-07-25 12:08:14.282154] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] CNTLID 0x0001 00:22:27.226 [2024-07-25 12:08:14.282158] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] transport max_sges 16 00:22:27.226 [2024-07-25 12:08:14.282162] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] fuses compare and write: 1 00:22:27.226 [2024-07-25 12:08:14.282167] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to configure AER (timeout 30000 ms) 00:22:27.226 [2024-07-25 12:08:14.282176] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for configure aer (timeout 30000 ms) 00:22:27.226 [2024-07-25 12:08:14.282187] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282190] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282193] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282200] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.226 [2024-07-25 12:08:14.282214] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.226 [2024-07-25 12:08:14.282374] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.226 [2024-07-25 12:08:14.282384] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.226 [2024-07-25 12:08:14.282386] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282390] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.226 [2024-07-25 12:08:14.282397] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282401] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282404] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282410] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.226 [2024-07-25 12:08:14.282415] 
nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282422] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282425] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282429] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.226 [2024-07-25 12:08:14.282434] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282438] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282441] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282446] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.226 [2024-07-25 12:08:14.282451] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282454] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282457] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282462] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.226 [2024-07-25 12:08:14.282466] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to set keep alive timeout (timeout 30000 ms) 00:22:27.226 [2024-07-25 12:08:14.282479] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:27.226 [2024-07-25 12:08:14.282485] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282488] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282494] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.226 [2024-07-25 12:08:14.282507] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055e40, cid 0, qid 0 00:22:27.226 [2024-07-25 12:08:14.282512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2055fc0, cid 1, qid 0 00:22:27.226 [2024-07-25 12:08:14.282516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2056140, cid 2, qid 0 00:22:27.226 [2024-07-25 12:08:14.282520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.226 [2024-07-25 12:08:14.282524] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2056440, cid 4, qid 0 00:22:27.226 [2024-07-25 12:08:14.282728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.226 [2024-07-25 12:08:14.282738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.226 [2024-07-25 12:08:14.282741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282745] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056440) on tqpair=0x1fd2ec0 00:22:27.226 [2024-07-25 12:08:14.282749] 
nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Sending keep alive every 5000000 us 00:22:27.226 [2024-07-25 12:08:14.282755] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] setting state to ready (no timeout) 00:22:27.226 [2024-07-25 12:08:14.282766] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282770] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.282777] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.226 [2024-07-25 12:08:14.282788] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2056440, cid 4, qid 0 00:22:27.226 [2024-07-25 12:08:14.282944] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.226 [2024-07-25 12:08:14.282957] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.226 [2024-07-25 12:08:14.282960] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.282963] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2ec0): datao=0, datal=4096, cccid=4 00:22:27.226 [2024-07-25 12:08:14.282967] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2056440) on tqpair(0x1fd2ec0): expected_datao=0, payload_size=4096 00:22:27.226 [2024-07-25 12:08:14.282971] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283257] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283261] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283416] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.226 [2024-07-25 12:08:14.283426] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.226 [2024-07-25 12:08:14.283429] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283432] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056440) on tqpair=0x1fd2ec0 00:22:27.226 [2024-07-25 12:08:14.283445] nvme_ctrlr.c:4160:nvme_ctrlr_process_init: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Ctrlr already in ready state 00:22:27.226 [2024-07-25 12:08:14.283468] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283472] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.283479] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.226 [2024-07-25 12:08:14.283485] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283488] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283491] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0x1fd2ec0) 00:22:27.226 [2024-07-25 12:08:14.283496] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.226 [2024-07-25 12:08:14.283512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp 
req 0x2056440, cid 4, qid 0 00:22:27.226 [2024-07-25 12:08:14.283517] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20565c0, cid 5, qid 0 00:22:27.226 [2024-07-25 12:08:14.283703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.226 [2024-07-25 12:08:14.283713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.226 [2024-07-25 12:08:14.283716] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283720] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2ec0): datao=0, datal=1024, cccid=4 00:22:27.226 [2024-07-25 12:08:14.283724] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2056440) on tqpair(0x1fd2ec0): expected_datao=0, payload_size=1024 00:22:27.226 [2024-07-25 12:08:14.283728] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283734] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283737] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283742] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.226 [2024-07-25 12:08:14.283747] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.226 [2024-07-25 12:08:14.283750] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.226 [2024-07-25 12:08:14.283753] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20565c0) on tqpair=0x1fd2ec0 00:22:27.227 [2024-07-25 12:08:14.326052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.227 [2024-07-25 12:08:14.326063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.227 [2024-07-25 12:08:14.326066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056440) on tqpair=0x1fd2ec0 00:22:27.227 [2024-07-25 12:08:14.326090] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326094] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2ec0) 00:22:27.227 [2024-07-25 12:08:14.326101] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:02ff0070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.227 [2024-07-25 12:08:14.326117] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2056440, cid 4, qid 0 00:22:27.227 [2024-07-25 12:08:14.326356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.227 [2024-07-25 12:08:14.326367] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.227 [2024-07-25 12:08:14.326370] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326373] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2ec0): datao=0, datal=3072, cccid=4 00:22:27.227 [2024-07-25 12:08:14.326377] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2056440) on tqpair(0x1fd2ec0): expected_datao=0, payload_size=3072 00:22:27.227 [2024-07-25 12:08:14.326381] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326387] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326390] 
nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326498] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.227 [2024-07-25 12:08:14.326508] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.227 [2024-07-25 12:08:14.326511] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326515] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056440) on tqpair=0x1fd2ec0 00:22:27.227 [2024-07-25 12:08:14.326524] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326528] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0x1fd2ec0) 00:22:27.227 [2024-07-25 12:08:14.326534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:0 cdw10:00010070 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.227 [2024-07-25 12:08:14.326551] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x2056440, cid 4, qid 0 00:22:27.227 [2024-07-25 12:08:14.326744] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.227 [2024-07-25 12:08:14.326753] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.227 [2024-07-25 12:08:14.326757] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326760] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0x1fd2ec0): datao=0, datal=8, cccid=4 00:22:27.227 [2024-07-25 12:08:14.326764] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0x2056440) on tqpair(0x1fd2ec0): expected_datao=0, payload_size=8 00:22:27.227 [2024-07-25 12:08:14.326767] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326773] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.326777] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.367520] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.227 [2024-07-25 12:08:14.367530] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.227 [2024-07-25 12:08:14.367533] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.227 [2024-07-25 12:08:14.367536] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056440) on tqpair=0x1fd2ec0 00:22:27.227 ===================================================== 00:22:27.227 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2014-08.org.nvmexpress.discovery 00:22:27.227 ===================================================== 00:22:27.227 Controller Capabilities/Features 00:22:27.227 ================================ 00:22:27.227 Vendor ID: 0000 00:22:27.227 Subsystem Vendor ID: 0000 00:22:27.227 Serial Number: .................... 00:22:27.227 Model Number: ........................................ 
00:22:27.227 Firmware Version: 24.09 00:22:27.227 Recommended Arb Burst: 0 00:22:27.227 IEEE OUI Identifier: 00 00 00 00:22:27.227 Multi-path I/O 00:22:27.227 May have multiple subsystem ports: No 00:22:27.227 May have multiple controllers: No 00:22:27.227 Associated with SR-IOV VF: No 00:22:27.227 Max Data Transfer Size: 131072 00:22:27.227 Max Number of Namespaces: 0 00:22:27.227 Max Number of I/O Queues: 1024 00:22:27.227 NVMe Specification Version (VS): 1.3 00:22:27.227 NVMe Specification Version (Identify): 1.3 00:22:27.227 Maximum Queue Entries: 128 00:22:27.227 Contiguous Queues Required: Yes 00:22:27.227 Arbitration Mechanisms Supported 00:22:27.227 Weighted Round Robin: Not Supported 00:22:27.227 Vendor Specific: Not Supported 00:22:27.227 Reset Timeout: 15000 ms 00:22:27.227 Doorbell Stride: 4 bytes 00:22:27.227 NVM Subsystem Reset: Not Supported 00:22:27.227 Command Sets Supported 00:22:27.227 NVM Command Set: Supported 00:22:27.227 Boot Partition: Not Supported 00:22:27.227 Memory Page Size Minimum: 4096 bytes 00:22:27.227 Memory Page Size Maximum: 4096 bytes 00:22:27.227 Persistent Memory Region: Not Supported 00:22:27.227 Optional Asynchronous Events Supported 00:22:27.227 Namespace Attribute Notices: Not Supported 00:22:27.227 Firmware Activation Notices: Not Supported 00:22:27.227 ANA Change Notices: Not Supported 00:22:27.227 PLE Aggregate Log Change Notices: Not Supported 00:22:27.227 LBA Status Info Alert Notices: Not Supported 00:22:27.227 EGE Aggregate Log Change Notices: Not Supported 00:22:27.227 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.227 Zone Descriptor Change Notices: Not Supported 00:22:27.227 Discovery Log Change Notices: Supported 00:22:27.227 Controller Attributes 00:22:27.227 128-bit Host Identifier: Not Supported 00:22:27.227 Non-Operational Permissive Mode: Not Supported 00:22:27.227 NVM Sets: Not Supported 00:22:27.227 Read Recovery Levels: Not Supported 00:22:27.227 Endurance Groups: Not Supported 00:22:27.227 Predictable Latency Mode: Not Supported 00:22:27.227 Traffic Based Keep ALive: Not Supported 00:22:27.227 Namespace Granularity: Not Supported 00:22:27.227 SQ Associations: Not Supported 00:22:27.227 UUID List: Not Supported 00:22:27.227 Multi-Domain Subsystem: Not Supported 00:22:27.227 Fixed Capacity Management: Not Supported 00:22:27.227 Variable Capacity Management: Not Supported 00:22:27.227 Delete Endurance Group: Not Supported 00:22:27.227 Delete NVM Set: Not Supported 00:22:27.227 Extended LBA Formats Supported: Not Supported 00:22:27.227 Flexible Data Placement Supported: Not Supported 00:22:27.227 00:22:27.227 Controller Memory Buffer Support 00:22:27.227 ================================ 00:22:27.227 Supported: No 00:22:27.227 00:22:27.227 Persistent Memory Region Support 00:22:27.227 ================================ 00:22:27.227 Supported: No 00:22:27.227 00:22:27.227 Admin Command Set Attributes 00:22:27.227 ============================ 00:22:27.227 Security Send/Receive: Not Supported 00:22:27.227 Format NVM: Not Supported 00:22:27.227 Firmware Activate/Download: Not Supported 00:22:27.227 Namespace Management: Not Supported 00:22:27.227 Device Self-Test: Not Supported 00:22:27.227 Directives: Not Supported 00:22:27.227 NVMe-MI: Not Supported 00:22:27.227 Virtualization Management: Not Supported 00:22:27.227 Doorbell Buffer Config: Not Supported 00:22:27.227 Get LBA Status Capability: Not Supported 00:22:27.228 Command & Feature Lockdown Capability: Not Supported 00:22:27.228 Abort Command Limit: 1 00:22:27.228 Async 
Event Request Limit: 4 00:22:27.228 Number of Firmware Slots: N/A 00:22:27.228 Firmware Slot 1 Read-Only: N/A 00:22:27.228 Firmware Activation Without Reset: N/A 00:22:27.228 Multiple Update Detection Support: N/A 00:22:27.228 Firmware Update Granularity: No Information Provided 00:22:27.228 Per-Namespace SMART Log: No 00:22:27.228 Asymmetric Namespace Access Log Page: Not Supported 00:22:27.228 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:22:27.228 Command Effects Log Page: Not Supported 00:22:27.228 Get Log Page Extended Data: Supported 00:22:27.228 Telemetry Log Pages: Not Supported 00:22:27.228 Persistent Event Log Pages: Not Supported 00:22:27.228 Supported Log Pages Log Page: May Support 00:22:27.228 Commands Supported & Effects Log Page: Not Supported 00:22:27.228 Feature Identifiers & Effects Log Page:May Support 00:22:27.228 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.228 Data Area 4 for Telemetry Log: Not Supported 00:22:27.228 Error Log Page Entries Supported: 128 00:22:27.228 Keep Alive: Not Supported 00:22:27.228 00:22:27.228 NVM Command Set Attributes 00:22:27.228 ========================== 00:22:27.228 Submission Queue Entry Size 00:22:27.228 Max: 1 00:22:27.228 Min: 1 00:22:27.228 Completion Queue Entry Size 00:22:27.228 Max: 1 00:22:27.228 Min: 1 00:22:27.228 Number of Namespaces: 0 00:22:27.228 Compare Command: Not Supported 00:22:27.228 Write Uncorrectable Command: Not Supported 00:22:27.228 Dataset Management Command: Not Supported 00:22:27.228 Write Zeroes Command: Not Supported 00:22:27.228 Set Features Save Field: Not Supported 00:22:27.228 Reservations: Not Supported 00:22:27.228 Timestamp: Not Supported 00:22:27.228 Copy: Not Supported 00:22:27.228 Volatile Write Cache: Not Present 00:22:27.228 Atomic Write Unit (Normal): 1 00:22:27.228 Atomic Write Unit (PFail): 1 00:22:27.228 Atomic Compare & Write Unit: 1 00:22:27.228 Fused Compare & Write: Supported 00:22:27.228 Scatter-Gather List 00:22:27.228 SGL Command Set: Supported 00:22:27.228 SGL Keyed: Supported 00:22:27.228 SGL Bit Bucket Descriptor: Not Supported 00:22:27.228 SGL Metadata Pointer: Not Supported 00:22:27.228 Oversized SGL: Not Supported 00:22:27.228 SGL Metadata Address: Not Supported 00:22:27.228 SGL Offset: Supported 00:22:27.228 Transport SGL Data Block: Not Supported 00:22:27.228 Replay Protected Memory Block: Not Supported 00:22:27.228 00:22:27.228 Firmware Slot Information 00:22:27.228 ========================= 00:22:27.228 Active slot: 0 00:22:27.228 00:22:27.228 00:22:27.228 Error Log 00:22:27.228 ========= 00:22:27.228 00:22:27.228 Active Namespaces 00:22:27.228 ================= 00:22:27.228 Discovery Log Page 00:22:27.228 ================== 00:22:27.228 Generation Counter: 2 00:22:27.228 Number of Records: 2 00:22:27.228 Record Format: 0 00:22:27.228 00:22:27.228 Discovery Log Entry 0 00:22:27.228 ---------------------- 00:22:27.228 Transport Type: 3 (TCP) 00:22:27.228 Address Family: 1 (IPv4) 00:22:27.228 Subsystem Type: 3 (Current Discovery Subsystem) 00:22:27.228 Entry Flags: 00:22:27.228 Duplicate Returned Information: 1 00:22:27.228 Explicit Persistent Connection Support for Discovery: 1 00:22:27.228 Transport Requirements: 00:22:27.228 Secure Channel: Not Required 00:22:27.228 Port ID: 0 (0x0000) 00:22:27.228 Controller ID: 65535 (0xffff) 00:22:27.228 Admin Max SQ Size: 128 00:22:27.228 Transport Service Identifier: 4420 00:22:27.228 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:22:27.228 Transport Address: 10.0.0.2 00:22:27.228 
Discovery Log Entry 1 00:22:27.228 ---------------------- 00:22:27.228 Transport Type: 3 (TCP) 00:22:27.228 Address Family: 1 (IPv4) 00:22:27.228 Subsystem Type: 2 (NVM Subsystem) 00:22:27.228 Entry Flags: 00:22:27.228 Duplicate Returned Information: 0 00:22:27.228 Explicit Persistent Connection Support for Discovery: 0 00:22:27.228 Transport Requirements: 00:22:27.228 Secure Channel: Not Required 00:22:27.228 Port ID: 0 (0x0000) 00:22:27.228 Controller ID: 65535 (0xffff) 00:22:27.228 Admin Max SQ Size: 128 00:22:27.228 Transport Service Identifier: 4420 00:22:27.228 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:cnode1 00:22:27.228 Transport Address: 10.0.0.2 [2024-07-25 12:08:14.367612] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] Prepare to destruct SSD 00:22:27.228 [2024-07-25 12:08:14.367622] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055e40) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.367628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.228 [2024-07-25 12:08:14.367632] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2055fc0) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.367638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.228 [2024-07-25 12:08:14.367642] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x2056140) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.367646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.228 [2024-07-25 12:08:14.367650] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.367654] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.228 [2024-07-25 12:08:14.367664] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.367667] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.367671] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.228 [2024-07-25 12:08:14.367677] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.228 [2024-07-25 12:08:14.367692] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.228 [2024-07-25 12:08:14.367844] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.228 [2024-07-25 12:08:14.367854] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.228 [2024-07-25 12:08:14.367857] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.367861] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.367868] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.367871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.367874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.228 [2024-07-25 
12:08:14.367881] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.228 [2024-07-25 12:08:14.367897] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.228 [2024-07-25 12:08:14.368098] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.228 [2024-07-25 12:08:14.368108] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.228 [2024-07-25 12:08:14.368111] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.228 [2024-07-25 12:08:14.368115] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.228 [2024-07-25 12:08:14.368120] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] RTD3E = 0 us 00:22:27.228 [2024-07-25 12:08:14.368125] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown timeout = 10000 ms 00:22:27.229 [2024-07-25 12:08:14.368135] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368139] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368142] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.368149] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.368161] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.368344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.368354] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.368357] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368361] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.368376] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368380] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368382] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.368389] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.368400] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.368595] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.368605] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.368608] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368611] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.368622] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.368629] 
nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.368635] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.368647] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.368999] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.369005] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.369007] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369011] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.369020] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.369032] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.369047] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.369197] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.369207] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.369210] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369213] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.369224] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369228] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369231] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.369237] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.369249] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.369442] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.369452] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.369455] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369459] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.369469] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369476] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369479] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.369485] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC 
PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.369497] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.369695] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.369704] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.369707] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369711] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.369722] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369725] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369728] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.369735] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.369746] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.369898] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.369907] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.369910] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369914] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.369924] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369928] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.369931] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.369937] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.369949] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 [2024-07-25 12:08:14.374051] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.374058] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.374061] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.374064] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.374073] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.374077] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.374080] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0x1fd2ec0) 00:22:27.229 [2024-07-25 12:08:14.374086] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.229 [2024-07-25 12:08:14.374098] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0x20562c0, cid 3, qid 0 00:22:27.229 
[2024-07-25 12:08:14.374370] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.229 [2024-07-25 12:08:14.374379] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.229 [2024-07-25 12:08:14.374383] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.229 [2024-07-25 12:08:14.374386] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0x20562c0) on tqpair=0x1fd2ec0 00:22:27.229 [2024-07-25 12:08:14.374394] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2014-08.org.nvmexpress.discovery] shutdown complete in 6 milliseconds 00:22:27.229 00:22:27.229 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' -L all 00:22:27.229 [2024-07-25 12:08:14.416687] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:22:27.230 [2024-07-25 12:08:14.416737] [ DPDK EAL parameters: identify --no-shconf -c 0x1 -n 1 -m 0 --no-pci --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid407183 ] 00:22:27.230 EAL: No free 2048 kB hugepages reported on node 1 00:22:27.230 [2024-07-25 12:08:14.446311] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to connect adminq (no timeout) 00:22:27.230 [2024-07-25 12:08:14.446354] nvme_tcp.c:2338:nvme_tcp_qpair_connect_sock: *DEBUG*: adrfam 1 ai_family 2 00:22:27.230 [2024-07-25 12:08:14.446359] nvme_tcp.c:2342:nvme_tcp_qpair_connect_sock: *DEBUG*: trsvcid is 4420 00:22:27.230 [2024-07-25 12:08:14.446372] nvme_tcp.c:2360:nvme_tcp_qpair_connect_sock: *DEBUG*: sock_impl_name is (null) 00:22:27.230 [2024-07-25 12:08:14.446380] sock.c: 373:spdk_sock_connect_ext: *DEBUG*: Creating a client socket using impl posix 00:22:27.230 [2024-07-25 12:08:14.446945] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for connect adminq (no timeout) 00:22:27.230 [2024-07-25 12:08:14.446968] nvme_tcp.c:1555:nvme_tcp_send_icreq_complete: *DEBUG*: Complete the icreq send for tqpair=0xf28ec0 0 00:22:27.230 [2024-07-25 12:08:14.460049] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 1 00:22:27.230 [2024-07-25 12:08:14.460070] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =1 00:22:27.230 [2024-07-25 12:08:14.460074] nvme_tcp.c:1601:nvme_tcp_icresp_handle: *DEBUG*: host_hdgst_enable: 0 00:22:27.230 [2024-07-25 12:08:14.460077] nvme_tcp.c:1602:nvme_tcp_icresp_handle: *DEBUG*: host_ddgst_enable: 0 00:22:27.230 [2024-07-25 12:08:14.460115] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.460120] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.460124] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.230 [2024-07-25 12:08:14.460135] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:0 cid:0 SGL DATA BLOCK OFFSET 0x0 len:0x400 00:22:27.230 [2024-07-25 12:08:14.460150] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.230 [2024-07-25 12:08:14.467054] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 
00:22:27.230 [2024-07-25 12:08:14.467063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.230 [2024-07-25 12:08:14.467066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.230 [2024-07-25 12:08:14.467079] nvme_fabric.c: 622:_nvme_fabric_qpair_connect_poll: *DEBUG*: CNTLID 0x0001 00:22:27.230 [2024-07-25 12:08:14.467085] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs (no timeout) 00:22:27.230 [2024-07-25 12:08:14.467089] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read vs wait for vs (no timeout) 00:22:27.230 [2024-07-25 12:08:14.467101] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467105] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467108] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.230 [2024-07-25 12:08:14.467114] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.230 [2024-07-25 12:08:14.467127] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.230 [2024-07-25 12:08:14.467356] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.230 [2024-07-25 12:08:14.467370] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.230 [2024-07-25 12:08:14.467373] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467377] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.230 [2024-07-25 12:08:14.467386] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap (no timeout) 00:22:27.230 [2024-07-25 12:08:14.467395] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to read cap wait for cap (no timeout) 00:22:27.230 [2024-07-25 12:08:14.467404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467407] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.230 [2024-07-25 12:08:14.467419] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.230 [2024-07-25 12:08:14.467433] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.230 [2024-07-25 12:08:14.467586] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.230 [2024-07-25 12:08:14.467596] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.230 [2024-07-25 12:08:14.467599] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467602] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.230 [2024-07-25 12:08:14.467607] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en (no timeout) 00:22:27.230 [2024-07-25 
12:08:14.467616] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to check en wait for cc (timeout 15000 ms) 00:22:27.230 [2024-07-25 12:08:14.467623] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467626] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467629] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.230 [2024-07-25 12:08:14.467637] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.230 [2024-07-25 12:08:14.467649] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.230 [2024-07-25 12:08:14.467803] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.230 [2024-07-25 12:08:14.467812] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.230 [2024-07-25 12:08:14.467815] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467819] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.230 [2024-07-25 12:08:14.467824] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to disable and wait for CSTS.RDY = 0 (timeout 15000 ms) 00:22:27.230 [2024-07-25 12:08:14.467834] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467838] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.230 [2024-07-25 12:08:14.467841] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.230 [2024-07-25 12:08:14.467848] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.230 [2024-07-25 12:08:14.467860] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.230 [2024-07-25 12:08:14.468009] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.230 [2024-07-25 12:08:14.468019] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.230 [2024-07-25 12:08:14.468022] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468029] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.231 [2024-07-25 12:08:14.468033] nvme_ctrlr.c:3869:nvme_ctrlr_process_init_wait_for_ready_0: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 0 && CSTS.RDY = 0 00:22:27.231 [2024-07-25 12:08:14.468037] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to controller is disabled (timeout 15000 ms) 00:22:27.231 [2024-07-25 12:08:14.468052] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 (timeout 15000 ms) 00:22:27.231 [2024-07-25 12:08:14.468157] nvme_ctrlr.c:4062:nvme_ctrlr_process_init: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Setting CC.EN = 1 00:22:27.231 [2024-07-25 12:08:14.468161] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to enable controller by writing CC.EN = 1 reg (timeout 15000 ms) 00:22:27.231 [2024-07-25 12:08:14.468169] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 
00:22:27.231 [2024-07-25 12:08:14.468172] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468176] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.468182] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.231 [2024-07-25 12:08:14.468195] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.231 [2024-07-25 12:08:14.468344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.231 [2024-07-25 12:08:14.468353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.231 [2024-07-25 12:08:14.468356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.231 [2024-07-25 12:08:14.468364] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for CSTS.RDY = 1 (timeout 15000 ms) 00:22:27.231 [2024-07-25 12:08:14.468374] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468378] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468381] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.468388] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:0 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.231 [2024-07-25 12:08:14.468399] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.231 [2024-07-25 12:08:14.468558] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.231 [2024-07-25 12:08:14.468567] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.231 [2024-07-25 12:08:14.468570] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468574] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.231 [2024-07-25 12:08:14.468578] nvme_ctrlr.c:3904:nvme_ctrlr_process_init_enable_wait_for_ready_1: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CC.EN = 1 && CSTS.RDY = 1 - controller is ready 00:22:27.231 [2024-07-25 12:08:14.468582] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to reset admin queue (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.468591] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller (no timeout) 00:22:27.231 [2024-07-25 12:08:14.468604] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify controller (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.468612] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468616] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.468622] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:0 nsid:0 cdw10:00000001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.231 [2024-07-25 12:08:14.468636] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.231 [2024-07-25 12:08:14.468823] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.231 [2024-07-25 12:08:14.468833] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.231 [2024-07-25 12:08:14.468836] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468840] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=4096, cccid=0 00:22:27.231 [2024-07-25 12:08:14.468844] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfabe40) on tqpair(0xf28ec0): expected_datao=0, payload_size=4096 00:22:27.231 [2024-07-25 12:08:14.468848] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468854] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.468857] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469143] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.231 [2024-07-25 12:08:14.469149] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.231 [2024-07-25 12:08:14.469152] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469155] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.231 [2024-07-25 12:08:14.469162] nvme_ctrlr.c:2053:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_xfer_size 4294967295 00:22:27.231 [2024-07-25 12:08:14.469166] nvme_ctrlr.c:2057:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] MDTS max_xfer_size 131072 00:22:27.231 [2024-07-25 12:08:14.469170] nvme_ctrlr.c:2060:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] CNTLID 0x0001 00:22:27.231 [2024-07-25 12:08:14.469174] nvme_ctrlr.c:2084:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] transport max_sges 16 00:22:27.231 [2024-07-25 12:08:14.469178] nvme_ctrlr.c:2099:nvme_ctrlr_identify_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] fuses compare and write: 1 00:22:27.231 [2024-07-25 12:08:14.469182] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to configure AER (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.469190] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for configure aer (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.469200] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469204] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469207] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.469213] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES ASYNC EVENT CONFIGURATION cid:0 cdw10:0000000b SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.231 [2024-07-25 12:08:14.469225] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.231 [2024-07-25 12:08:14.469382] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.231 [2024-07-25 12:08:14.469391] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.231 [2024-07-25 12:08:14.469394] 
nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469398] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.231 [2024-07-25 12:08:14.469404] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469408] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469411] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=0 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.469416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.231 [2024-07-25 12:08:14.469422] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469428] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469431] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=1 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.469436] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.231 [2024-07-25 12:08:14.469441] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469444] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469447] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=2 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.469452] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.231 [2024-07-25 12:08:14.469457] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469460] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469463] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.231 [2024-07-25 12:08:14.469468] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.231 [2024-07-25 12:08:14.469472] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set keep alive timeout (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.469484] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set keep alive timeout (timeout 30000 ms) 00:22:27.231 [2024-07-25 12:08:14.469490] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.231 [2024-07-25 12:08:14.469493] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.232 [2024-07-25 12:08:14.469499] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES KEEP ALIVE TIMER cid:4 cdw10:0000000f SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.232 [2024-07-25 12:08:14.469512] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabe40, cid 0, qid 0 00:22:27.232 [2024-07-25 12:08:14.469516] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfabfc0, cid 1, qid 0 00:22:27.232 [2024-07-25 12:08:14.469520] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac140, cid 2, qid 0 00:22:27.232 [2024-07-25 12:08:14.469524] nvme_tcp.c: 
941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.232 [2024-07-25 12:08:14.469528] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.232 [2024-07-25 12:08:14.469718] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.232 [2024-07-25 12:08:14.469728] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.232 [2024-07-25 12:08:14.469731] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.469734] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.232 [2024-07-25 12:08:14.469739] nvme_ctrlr.c:3022:nvme_ctrlr_set_keep_alive_timeout_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Sending keep alive every 5000000 us 00:22:27.232 [2024-07-25 12:08:14.469744] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify controller iocs specific (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.469755] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set number of queues (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.469761] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for set number of queues (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.469768] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.469771] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.469774] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.232 [2024-07-25 12:08:14.469780] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: SET FEATURES NUMBER OF QUEUES cid:4 cdw10:00000007 SGL DATA BLOCK OFFSET 0x0 len:0x0 00:22:27.232 [2024-07-25 12:08:14.469795] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.232 [2024-07-25 12:08:14.469949] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.232 [2024-07-25 12:08:14.469959] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.232 [2024-07-25 12:08:14.469962] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.469965] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.232 [2024-07-25 12:08:14.470021] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify active ns (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.470032] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify active ns (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.470039] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470049] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.232 [2024-07-25 12:08:14.470055] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:0 cdw10:00000002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.232 [2024-07-25 12:08:14.470068] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.232 [2024-07-25 12:08:14.470234] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: 
pdu type = 7 00:22:27.232 [2024-07-25 12:08:14.470244] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.232 [2024-07-25 12:08:14.470247] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470250] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=4096, cccid=4 00:22:27.232 [2024-07-25 12:08:14.470255] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac440) on tqpair(0xf28ec0): expected_datao=0, payload_size=4096 00:22:27.232 [2024-07-25 12:08:14.470258] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470264] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470268] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470557] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.232 [2024-07-25 12:08:14.470562] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.232 [2024-07-25 12:08:14.470565] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470568] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.232 [2024-07-25 12:08:14.470578] nvme_ctrlr.c:4693:spdk_nvme_ctrlr_get_ns: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Namespace 1 was added 00:22:27.232 [2024-07-25 12:08:14.470588] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.470597] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify ns (timeout 30000 ms) 00:22:27.232 [2024-07-25 12:08:14.470603] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470607] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.232 [2024-07-25 12:08:14.470613] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000000 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.232 [2024-07-25 12:08:14.470625] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.232 [2024-07-25 12:08:14.470797] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.232 [2024-07-25 12:08:14.470807] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.232 [2024-07-25 12:08:14.470810] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470813] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=4096, cccid=4 00:22:27.232 [2024-07-25 12:08:14.470822] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac440) on tqpair(0xf28ec0): expected_datao=0, payload_size=4096 00:22:27.232 [2024-07-25 12:08:14.470826] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470832] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.232 [2024-07-25 12:08:14.470836] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475052] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.494 [2024-07-25 12:08:14.475060] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: 
*DEBUG*: enter: pdu type =5 00:22:27.494 [2024-07-25 12:08:14.475063] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475067] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.494 [2024-07-25 12:08:14.475081] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify namespace id descriptors (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475091] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to wait for identify namespace id descriptors (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475098] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475102] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.494 [2024-07-25 12:08:14.475108] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: IDENTIFY (06) qid:0 cid:4 nsid:1 cdw10:00000003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.494 [2024-07-25 12:08:14.475121] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.494 [2024-07-25 12:08:14.475366] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.494 [2024-07-25 12:08:14.475376] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.494 [2024-07-25 12:08:14.475379] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475382] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=4096, cccid=4 00:22:27.494 [2024-07-25 12:08:14.475386] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac440) on tqpair(0xf28ec0): expected_datao=0, payload_size=4096 00:22:27.494 [2024-07-25 12:08:14.475390] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475396] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475399] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475700] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.494 [2024-07-25 12:08:14.475706] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.494 [2024-07-25 12:08:14.475709] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475712] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.494 [2024-07-25 12:08:14.475720] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to identify ns iocs specific (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475727] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported log pages (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475736] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set supported features (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475769] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host behavior support feature (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475773] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set doorbell 
buffer config (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475778] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to set host ID (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475786] nvme_ctrlr.c:3110:nvme_ctrlr_set_host_id: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] NVMe-oF transport - not sending Set Features - Host ID 00:22:27.494 [2024-07-25 12:08:14.475790] nvme_ctrlr.c:1553:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to transport ready (timeout 30000 ms) 00:22:27.494 [2024-07-25 12:08:14.475794] nvme_ctrlr.c:1559:_nvme_ctrlr_set_state: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] setting state to ready (no timeout) 00:22:27.494 [2024-07-25 12:08:14.475807] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.494 [2024-07-25 12:08:14.475817] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ARBITRATION cid:4 cdw10:00000001 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.494 [2024-07-25 12:08:14.475823] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475826] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.475829] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf28ec0) 00:22:27.494 [2024-07-25 12:08:14.475834] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:5 nsid:0 cdw10:00000000 cdw11:00000000 00:22:27.494 [2024-07-25 12:08:14.475849] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.494 [2024-07-25 12:08:14.475854] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac5c0, cid 5, qid 0 00:22:27.494 [2024-07-25 12:08:14.476122] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.494 [2024-07-25 12:08:14.476134] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.494 [2024-07-25 12:08:14.476137] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.494 [2024-07-25 12:08:14.476140] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.494 [2024-07-25 12:08:14.476146] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.494 [2024-07-25 12:08:14.476151] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.494 [2024-07-25 12:08:14.476154] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476157] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac5c0) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.476167] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476171] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.476177] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES POWER MANAGEMENT cid:5 cdw10:00000002 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.476190] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac5c0, cid 5, qid 0 00:22:27.495 [2024-07-25 12:08:14.476367] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.476377] 
nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.476380] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476384] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac5c0) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.476394] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476398] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.476404] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES TEMPERATURE THRESHOLD cid:5 cdw10:00000004 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.476415] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac5c0, cid 5, qid 0 00:22:27.495 [2024-07-25 12:08:14.476771] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.476776] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.476779] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476786] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac5c0) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.476794] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476798] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.476804] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:5 cdw10:00000007 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.476814] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac5c0, cid 5, qid 0 00:22:27.495 [2024-07-25 12:08:14.476967] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.476976] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.476980] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.476983] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac5c0) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.477000] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477004] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=5 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.477010] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:5 nsid:ffffffff cdw10:07ff0001 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.477016] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477020] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=4 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.477025] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:4 nsid:ffffffff cdw10:007f0002 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.477031] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477035] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=6 on tqpair(0xf28ec0) 00:22:27.495 
[2024-07-25 12:08:14.477040] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:6 nsid:ffffffff cdw10:007f0003 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.477052] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477055] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf28ec0) 00:22:27.495 [2024-07-25 12:08:14.477060] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: GET LOG PAGE (02) qid:0 cid:7 nsid:ffffffff cdw10:03ff0005 cdw11:00000000 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.495 [2024-07-25 12:08:14.477074] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac5c0, cid 5, qid 0 00:22:27.495 [2024-07-25 12:08:14.477079] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac440, cid 4, qid 0 00:22:27.495 [2024-07-25 12:08:14.477083] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac740, cid 6, qid 0 00:22:27.495 [2024-07-25 12:08:14.477087] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac8c0, cid 7, qid 0 00:22:27.495 [2024-07-25 12:08:14.477314] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.495 [2024-07-25 12:08:14.477324] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.495 [2024-07-25 12:08:14.477327] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477330] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=8192, cccid=5 00:22:27.495 [2024-07-25 12:08:14.477335] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac5c0) on tqpair(0xf28ec0): expected_datao=0, payload_size=8192 00:22:27.495 [2024-07-25 12:08:14.477339] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477911] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477915] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477920] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.495 [2024-07-25 12:08:14.477927] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.495 [2024-07-25 12:08:14.477930] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477933] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=512, cccid=4 00:22:27.495 [2024-07-25 12:08:14.477937] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac440) on tqpair(0xf28ec0): expected_datao=0, payload_size=512 00:22:27.495 [2024-07-25 12:08:14.477941] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477946] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477949] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477954] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.495 [2024-07-25 12:08:14.477958] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.495 [2024-07-25 12:08:14.477961] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477964] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on 
tqpair(0xf28ec0): datao=0, datal=512, cccid=6 00:22:27.495 [2024-07-25 12:08:14.477968] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac740) on tqpair(0xf28ec0): expected_datao=0, payload_size=512 00:22:27.495 [2024-07-25 12:08:14.477972] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477977] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477980] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477985] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 7 00:22:27.495 [2024-07-25 12:08:14.477989] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =7 00:22:27.495 [2024-07-25 12:08:14.477992] nvme_tcp.c:1719:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.477995] nvme_tcp.c:1720:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: c2h_data info on tqpair(0xf28ec0): datao=0, datal=4096, cccid=7 00:22:27.495 [2024-07-25 12:08:14.477999] nvme_tcp.c:1731:nvme_tcp_c2h_data_hdr_handle: *DEBUG*: tcp_req(0xfac8c0) on tqpair(0xf28ec0): expected_datao=0, payload_size=4096 00:22:27.495 [2024-07-25 12:08:14.478003] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478008] nvme_tcp.c:1521:nvme_tcp_pdu_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478011] nvme_tcp.c:1312:nvme_tcp_c2h_data_payload_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478263] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.478268] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.478272] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478275] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac5c0) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.478288] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.478293] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.478296] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478299] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac440) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.478308] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.478313] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.478316] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478320] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac740) on tqpair=0xf28ec0 00:22:27.495 [2024-07-25 12:08:14.478325] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.495 [2024-07-25 12:08:14.478330] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.495 [2024-07-25 12:08:14.478333] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.495 [2024-07-25 12:08:14.478336] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac8c0) on tqpair=0xf28ec0 00:22:27.495 ===================================================== 00:22:27.495 NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:27.495 ===================================================== 
00:22:27.495 Controller Capabilities/Features 00:22:27.495 ================================ 00:22:27.495 Vendor ID: 8086 00:22:27.495 Subsystem Vendor ID: 8086 00:22:27.495 Serial Number: SPDK00000000000001 00:22:27.495 Model Number: SPDK bdev Controller 00:22:27.495 Firmware Version: 24.09 00:22:27.495 Recommended Arb Burst: 6 00:22:27.495 IEEE OUI Identifier: e4 d2 5c 00:22:27.495 Multi-path I/O 00:22:27.495 May have multiple subsystem ports: Yes 00:22:27.495 May have multiple controllers: Yes 00:22:27.495 Associated with SR-IOV VF: No 00:22:27.495 Max Data Transfer Size: 131072 00:22:27.495 Max Number of Namespaces: 32 00:22:27.495 Max Number of I/O Queues: 127 00:22:27.495 NVMe Specification Version (VS): 1.3 00:22:27.495 NVMe Specification Version (Identify): 1.3 00:22:27.495 Maximum Queue Entries: 128 00:22:27.495 Contiguous Queues Required: Yes 00:22:27.495 Arbitration Mechanisms Supported 00:22:27.496 Weighted Round Robin: Not Supported 00:22:27.496 Vendor Specific: Not Supported 00:22:27.496 Reset Timeout: 15000 ms 00:22:27.496 Doorbell Stride: 4 bytes 00:22:27.496 NVM Subsystem Reset: Not Supported 00:22:27.496 Command Sets Supported 00:22:27.496 NVM Command Set: Supported 00:22:27.496 Boot Partition: Not Supported 00:22:27.496 Memory Page Size Minimum: 4096 bytes 00:22:27.496 Memory Page Size Maximum: 4096 bytes 00:22:27.496 Persistent Memory Region: Not Supported 00:22:27.496 Optional Asynchronous Events Supported 00:22:27.496 Namespace Attribute Notices: Supported 00:22:27.496 Firmware Activation Notices: Not Supported 00:22:27.496 ANA Change Notices: Not Supported 00:22:27.496 PLE Aggregate Log Change Notices: Not Supported 00:22:27.496 LBA Status Info Alert Notices: Not Supported 00:22:27.496 EGE Aggregate Log Change Notices: Not Supported 00:22:27.496 Normal NVM Subsystem Shutdown event: Not Supported 00:22:27.496 Zone Descriptor Change Notices: Not Supported 00:22:27.496 Discovery Log Change Notices: Not Supported 00:22:27.496 Controller Attributes 00:22:27.496 128-bit Host Identifier: Supported 00:22:27.496 Non-Operational Permissive Mode: Not Supported 00:22:27.496 NVM Sets: Not Supported 00:22:27.496 Read Recovery Levels: Not Supported 00:22:27.496 Endurance Groups: Not Supported 00:22:27.496 Predictable Latency Mode: Not Supported 00:22:27.496 Traffic Based Keep ALive: Not Supported 00:22:27.496 Namespace Granularity: Not Supported 00:22:27.496 SQ Associations: Not Supported 00:22:27.496 UUID List: Not Supported 00:22:27.496 Multi-Domain Subsystem: Not Supported 00:22:27.496 Fixed Capacity Management: Not Supported 00:22:27.496 Variable Capacity Management: Not Supported 00:22:27.496 Delete Endurance Group: Not Supported 00:22:27.496 Delete NVM Set: Not Supported 00:22:27.496 Extended LBA Formats Supported: Not Supported 00:22:27.496 Flexible Data Placement Supported: Not Supported 00:22:27.496 00:22:27.496 Controller Memory Buffer Support 00:22:27.496 ================================ 00:22:27.496 Supported: No 00:22:27.496 00:22:27.496 Persistent Memory Region Support 00:22:27.496 ================================ 00:22:27.496 Supported: No 00:22:27.496 00:22:27.496 Admin Command Set Attributes 00:22:27.496 ============================ 00:22:27.496 Security Send/Receive: Not Supported 00:22:27.496 Format NVM: Not Supported 00:22:27.496 Firmware Activate/Download: Not Supported 00:22:27.496 Namespace Management: Not Supported 00:22:27.496 Device Self-Test: Not Supported 00:22:27.496 Directives: Not Supported 00:22:27.496 NVMe-MI: Not Supported 00:22:27.496 
Virtualization Management: Not Supported 00:22:27.496 Doorbell Buffer Config: Not Supported 00:22:27.496 Get LBA Status Capability: Not Supported 00:22:27.496 Command & Feature Lockdown Capability: Not Supported 00:22:27.496 Abort Command Limit: 4 00:22:27.496 Async Event Request Limit: 4 00:22:27.496 Number of Firmware Slots: N/A 00:22:27.496 Firmware Slot 1 Read-Only: N/A 00:22:27.496 Firmware Activation Without Reset: N/A 00:22:27.496 Multiple Update Detection Support: N/A 00:22:27.496 Firmware Update Granularity: No Information Provided 00:22:27.496 Per-Namespace SMART Log: No 00:22:27.496 Asymmetric Namespace Access Log Page: Not Supported 00:22:27.496 Subsystem NQN: nqn.2016-06.io.spdk:cnode1 00:22:27.496 Command Effects Log Page: Supported 00:22:27.496 Get Log Page Extended Data: Supported 00:22:27.496 Telemetry Log Pages: Not Supported 00:22:27.496 Persistent Event Log Pages: Not Supported 00:22:27.496 Supported Log Pages Log Page: May Support 00:22:27.496 Commands Supported & Effects Log Page: Not Supported 00:22:27.496 Feature Identifiers & Effects Log Page:May Support 00:22:27.496 NVMe-MI Commands & Effects Log Page: May Support 00:22:27.496 Data Area 4 for Telemetry Log: Not Supported 00:22:27.496 Error Log Page Entries Supported: 128 00:22:27.496 Keep Alive: Supported 00:22:27.496 Keep Alive Granularity: 10000 ms 00:22:27.496 00:22:27.496 NVM Command Set Attributes 00:22:27.496 ========================== 00:22:27.496 Submission Queue Entry Size 00:22:27.496 Max: 64 00:22:27.496 Min: 64 00:22:27.496 Completion Queue Entry Size 00:22:27.496 Max: 16 00:22:27.496 Min: 16 00:22:27.496 Number of Namespaces: 32 00:22:27.496 Compare Command: Supported 00:22:27.496 Write Uncorrectable Command: Not Supported 00:22:27.496 Dataset Management Command: Supported 00:22:27.496 Write Zeroes Command: Supported 00:22:27.496 Set Features Save Field: Not Supported 00:22:27.496 Reservations: Supported 00:22:27.496 Timestamp: Not Supported 00:22:27.496 Copy: Supported 00:22:27.496 Volatile Write Cache: Present 00:22:27.496 Atomic Write Unit (Normal): 1 00:22:27.496 Atomic Write Unit (PFail): 1 00:22:27.496 Atomic Compare & Write Unit: 1 00:22:27.496 Fused Compare & Write: Supported 00:22:27.496 Scatter-Gather List 00:22:27.496 SGL Command Set: Supported 00:22:27.496 SGL Keyed: Supported 00:22:27.496 SGL Bit Bucket Descriptor: Not Supported 00:22:27.496 SGL Metadata Pointer: Not Supported 00:22:27.496 Oversized SGL: Not Supported 00:22:27.496 SGL Metadata Address: Not Supported 00:22:27.496 SGL Offset: Supported 00:22:27.496 Transport SGL Data Block: Not Supported 00:22:27.496 Replay Protected Memory Block: Not Supported 00:22:27.496 00:22:27.496 Firmware Slot Information 00:22:27.496 ========================= 00:22:27.496 Active slot: 1 00:22:27.496 Slot 1 Firmware Revision: 24.09 00:22:27.496 00:22:27.496 00:22:27.496 Commands Supported and Effects 00:22:27.496 ============================== 00:22:27.496 Admin Commands 00:22:27.496 -------------- 00:22:27.496 Get Log Page (02h): Supported 00:22:27.496 Identify (06h): Supported 00:22:27.496 Abort (08h): Supported 00:22:27.496 Set Features (09h): Supported 00:22:27.496 Get Features (0Ah): Supported 00:22:27.496 Asynchronous Event Request (0Ch): Supported 00:22:27.496 Keep Alive (18h): Supported 00:22:27.496 I/O Commands 00:22:27.496 ------------ 00:22:27.496 Flush (00h): Supported LBA-Change 00:22:27.496 Write (01h): Supported LBA-Change 00:22:27.496 Read (02h): Supported 00:22:27.496 Compare (05h): Supported 00:22:27.496 Write Zeroes (08h): 
Supported LBA-Change 00:22:27.496 Dataset Management (09h): Supported LBA-Change 00:22:27.496 Copy (19h): Supported LBA-Change 00:22:27.496 00:22:27.496 Error Log 00:22:27.496 ========= 00:22:27.496 00:22:27.496 Arbitration 00:22:27.496 =========== 00:22:27.496 Arbitration Burst: 1 00:22:27.496 00:22:27.496 Power Management 00:22:27.496 ================ 00:22:27.496 Number of Power States: 1 00:22:27.496 Current Power State: Power State #0 00:22:27.496 Power State #0: 00:22:27.496 Max Power: 0.00 W 00:22:27.496 Non-Operational State: Operational 00:22:27.496 Entry Latency: Not Reported 00:22:27.496 Exit Latency: Not Reported 00:22:27.496 Relative Read Throughput: 0 00:22:27.496 Relative Read Latency: 0 00:22:27.496 Relative Write Throughput: 0 00:22:27.496 Relative Write Latency: 0 00:22:27.496 Idle Power: Not Reported 00:22:27.496 Active Power: Not Reported 00:22:27.496 Non-Operational Permissive Mode: Not Supported 00:22:27.496 00:22:27.496 Health Information 00:22:27.496 ================== 00:22:27.496 Critical Warnings: 00:22:27.496 Available Spare Space: OK 00:22:27.496 Temperature: OK 00:22:27.496 Device Reliability: OK 00:22:27.496 Read Only: No 00:22:27.496 Volatile Memory Backup: OK 00:22:27.496 Current Temperature: 0 Kelvin (-273 Celsius) 00:22:27.496 Temperature Threshold: 0 Kelvin (-273 Celsius) 00:22:27.496 Available Spare: 0% 00:22:27.496 Available Spare Threshold: 0% 00:22:27.496 Life Percentage Used:[2024-07-25 12:08:14.478424] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.496 [2024-07-25 12:08:14.478429] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=7 on tqpair(0xf28ec0) 00:22:27.496 [2024-07-25 12:08:14.478435] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES ERROR_RECOVERY cid:7 cdw10:00000005 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.496 [2024-07-25 12:08:14.478449] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac8c0, cid 7, qid 0 00:22:27.496 [2024-07-25 12:08:14.478703] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.496 [2024-07-25 12:08:14.478713] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.496 [2024-07-25 12:08:14.478716] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.496 [2024-07-25 12:08:14.478720] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac8c0) on tqpair=0xf28ec0 00:22:27.496 [2024-07-25 12:08:14.478751] nvme_ctrlr.c:4357:nvme_ctrlr_destruct_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] Prepare to destruct SSD 00:22:27.496 [2024-07-25 12:08:14.478760] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabe40) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.478766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.497 [2024-07-25 12:08:14.478771] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfabfc0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.478775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.497 [2024-07-25 12:08:14.478779] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac140) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.478783] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.497 [2024-07-25 
12:08:14.478788] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.478792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:22:27.497 [2024-07-25 12:08:14.478799] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.478802] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.478805] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.478811] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.478824] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.478998] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.479007] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.479010] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.479014] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.479021] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.479024] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.479027] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.479034] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY SET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.483059] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.483317] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.483327] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.483330] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483337] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.483341] nvme_ctrlr.c:1147:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] RTD3E = 0 us 00:22:27.497 [2024-07-25 12:08:14.483345] nvme_ctrlr.c:1150:nvme_ctrlr_shutdown_set_cc_done: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown timeout = 10000 ms 00:22:27.497 [2024-07-25 12:08:14.483355] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483359] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483362] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.483369] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.483381] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.483533] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu 
type = 5 00:22:27.497 [2024-07-25 12:08:14.483543] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.483546] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483549] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.483560] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483564] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483567] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.483573] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.483585] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.483777] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.483787] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.483790] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483793] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.483804] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483808] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.483811] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.483817] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.483828] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.484022] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.484031] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.484034] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484038] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.484055] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484060] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484063] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.484069] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.484081] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.484273] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.484285] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.484289] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: 
enter 00:22:27.497 [2024-07-25 12:08:14.484292] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.484303] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484307] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484310] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.484316] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.484328] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.484484] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.484493] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.484496] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484500] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.484511] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484514] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484517] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.484523] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.484535] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.484728] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.484738] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.484741] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484744] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.484755] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484758] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484762] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.484768] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.484779] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.484972] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.484982] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.484985] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.484988] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.484999] nvme_tcp.c: 
790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.485003] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.485006] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.485012] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.485023] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.485216] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.497 [2024-07-25 12:08:14.485227] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.497 [2024-07-25 12:08:14.485230] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.485236] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.497 [2024-07-25 12:08:14.485247] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.485251] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.497 [2024-07-25 12:08:14.485254] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.497 [2024-07-25 12:08:14.485260] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.497 [2024-07-25 12:08:14.485272] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.497 [2024-07-25 12:08:14.485419] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.485429] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.485432] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485436] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.485447] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485450] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485453] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.485460] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.485472] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.485629] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.485638] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.485641] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485645] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.485655] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485659] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485662] nvme_tcp.c: 
976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.485668] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.485679] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.485840] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.485849] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.485852] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485855] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.485867] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485871] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.485874] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.485880] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.485891] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.486058] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.486068] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.486071] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486075] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.486088] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486092] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486095] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.486101] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.486114] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.486260] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.486270] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.486273] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486276] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.486287] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486291] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486294] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.486300] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.486312] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.486471] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.486481] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.486484] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486487] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.486498] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486502] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486505] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.486511] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.486523] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.486687] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.486697] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.486699] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486703] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.486714] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486717] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486720] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.486726] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.486738] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.486899] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.486908] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.486911] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486915] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.486925] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486932] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.486935] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.486941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.486953] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.491051] 
nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.491063] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.491066] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.491070] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.491081] nvme_tcp.c: 790:nvme_tcp_build_contig_request: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.491085] nvme_tcp.c: 967:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.491088] nvme_tcp.c: 976:nvme_tcp_qpair_capsule_cmd_send: *DEBUG*: capsule_cmd cid=3 on tqpair(0xf28ec0) 00:22:27.498 [2024-07-25 12:08:14.491095] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC PROPERTY GET qid:0 cid:3 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:22:27.498 [2024-07-25 12:08:14.491108] nvme_tcp.c: 941:nvme_tcp_qpair_cmd_send_complete: *DEBUG*: tcp req 0xfac2c0, cid 3, qid 0 00:22:27.498 [2024-07-25 12:08:14.491344] nvme_tcp.c:1187:nvme_tcp_pdu_ch_handle: *DEBUG*: pdu type = 5 00:22:27.498 [2024-07-25 12:08:14.491353] nvme_tcp.c:1975:nvme_tcp_pdu_psh_handle: *DEBUG*: enter: pdu type =5 00:22:27.498 [2024-07-25 12:08:14.491356] nvme_tcp.c:1648:nvme_tcp_capsule_resp_hdr_handle: *DEBUG*: enter 00:22:27.498 [2024-07-25 12:08:14.491360] nvme_tcp.c:1069:nvme_tcp_req_complete: *DEBUG*: complete tcp_req(0xfac2c0) on tqpair=0xf28ec0 00:22:27.498 [2024-07-25 12:08:14.491369] nvme_ctrlr.c:1269:nvme_ctrlr_shutdown_poll_async: *DEBUG*: [nqn.2016-06.io.spdk:cnode1] shutdown complete in 8 milliseconds 00:22:27.498 0% 00:22:27.498 Data Units Read: 0 00:22:27.498 Data Units Written: 0 00:22:27.498 Host Read Commands: 0 00:22:27.498 Host Write Commands: 0 00:22:27.498 Controller Busy Time: 0 minutes 00:22:27.498 Power Cycles: 0 00:22:27.498 Power On Hours: 0 hours 00:22:27.498 Unsafe Shutdowns: 0 00:22:27.498 Unrecoverable Media Errors: 0 00:22:27.498 Lifetime Error Log Entries: 0 00:22:27.498 Warning Temperature Time: 0 minutes 00:22:27.498 Critical Temperature Time: 0 minutes 00:22:27.498 00:22:27.498 Number of Queues 00:22:27.498 ================ 00:22:27.498 Number of I/O Submission Queues: 127 00:22:27.498 Number of I/O Completion Queues: 127 00:22:27.498 00:22:27.498 Active Namespaces 00:22:27.498 ================= 00:22:27.498 Namespace ID:1 00:22:27.498 Error Recovery Timeout: Unlimited 00:22:27.498 Command Set Identifier: NVM (00h) 00:22:27.498 Deallocate: Supported 00:22:27.498 Deallocated/Unwritten Error: Not Supported 00:22:27.498 Deallocated Read Value: Unknown 00:22:27.498 Deallocate in Write Zeroes: Not Supported 00:22:27.498 Deallocated Guard Field: 0xFFFF 00:22:27.498 Flush: Supported 00:22:27.498 Reservation: Supported 00:22:27.498 Namespace Sharing Capabilities: Multiple Controllers 00:22:27.498 Size (in LBAs): 131072 (0GiB) 00:22:27.498 Capacity (in LBAs): 131072 (0GiB) 00:22:27.498 Utilization (in LBAs): 131072 (0GiB) 00:22:27.498 NGUID: ABCDEF0123456789ABCDEF0123456789 00:22:27.498 EUI64: ABCDEF0123456789 00:22:27.498 UUID: 7d1cf4d7-83b5-4e1c-9709-5ba9d8dd30fb 00:22:27.498 Thin Provisioning: Not Supported 00:22:27.498 Per-NS Atomic Units: Yes 00:22:27.498 Atomic Boundary Size (Normal): 0 00:22:27.498 Atomic Boundary Size (PFail): 0 00:22:27.499 Atomic Boundary Offset: 0 00:22:27.499 Maximum Single Source Range Length: 65535 00:22:27.499 Maximum Copy Length: 65535 00:22:27.499 Maximum Source Range Count: 1 
00:22:27.499 NGUID/EUI64 Never Reused: No 00:22:27.499 Namespace Write Protected: No 00:22:27.499 Number of LBA Formats: 1 00:22:27.499 Current LBA Format: LBA Format #00 00:22:27.499 LBA Format #00: Data Size: 512 Metadata Size: 0 00:22:27.499 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@51 -- # sync 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@52 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@559 -- # xtrace_disable 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@54 -- # trap - SIGINT SIGTERM EXIT 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- host/identify.sh@56 -- # nvmftestfini 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@117 -- # sync 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@120 -- # set +e 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:27.499 rmmod nvme_tcp 00:22:27.499 rmmod nvme_fabrics 00:22:27.499 rmmod nvme_keyring 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@124 -- # set -e 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@125 -- # return 0 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@489 -- # '[' -n 406936 ']' 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@490 -- # killprocess 406936 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@948 -- # '[' -z 406936 ']' 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@952 -- # kill -0 406936 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # uname 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 406936 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@966 -- # echo 'killing process with pid 406936' 00:22:27.499 killing process with pid 406936 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@967 -- # kill 406936 00:22:27.499 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@972 -- # wait 406936 00:22:27.758 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:27.758 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:27.758 12:08:14 
nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:27.758 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:27.758 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:27.758 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:27.759 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:27.759 12:08:14 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:29.665 00:22:29.665 real 0m9.184s 00:22:29.665 user 0m7.247s 00:22:29.665 sys 0m4.470s 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host.nvmf_identify -- common/autotest_common.sh@10 -- # set +x 00:22:29.665 ************************************ 00:22:29.665 END TEST nvmf_identify 00:22:29.665 ************************************ 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@23 -- # run_test nvmf_perf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:22:29.665 12:08:16 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:29.925 ************************************ 00:22:29.925 START TEST nvmf_perf 00:22:29.925 ************************************ 00:22:29.925 12:08:16 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/perf.sh --transport=tcp 00:22:29.925 * Looking for test storage... 
00:22:29.925 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # uname -s 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@5 -- # export PATH 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@47 -- # : 0 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@12 -- # MALLOC_BDEV_SIZE=64 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@17 -- # nvmftestinit 00:22:29.925 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 
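perf.sh itself is short: once nvmftestinit and nvmfappstart complete (traced below), the target is configured entirely through rpc.py and then exercised with spdk_nvme_perf. The sequence below is condensed from the rpc.py calls that appear further down in this log; the NQN, serial number, Malloc0/Nvme0n1 namespaces and the 10.0.0.2:4420 listener are the values used by this run, and $rpc_py stands for the full scripts/rpc.py path shown above:

  $rpc_py nvmf_create_transport -t tcp -o
  $rpc_py bdev_malloc_create 64 512        # creates the Malloc0 bdev (size 64 MB, 512-byte blocks)
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
  $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  $rpc_py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420
  # one of the initiator-side runs, exactly as traced below:
  spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420'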
00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@285 -- # xtrace_disable 00:22:29.926 12:08:17 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # pci_devs=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # net_devs=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # e810=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@296 -- # local -ga e810 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # x722=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@297 -- # local -ga x722 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # mlx=() 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@298 -- # local -ga mlx 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:35.205 
12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:35.205 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:35.205 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 
00:22:35.205 Found net devices under 0000:86:00.0: cvl_0_0 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:35.205 Found net devices under 0000:86:00.1: cvl_0_1 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:35.205 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@414 -- # is_hw=yes 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@260 -- # ip netns exec 
cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:35.206 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:35.206 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.189 ms 00:22:35.206 00:22:35.206 --- 10.0.0.2 ping statistics --- 00:22:35.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.206 rtt min/avg/max/mdev = 0.189/0.189/0.189/0.000 ms 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:35.206 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:22:35.206 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.407 ms 00:22:35.206 00:22:35.206 --- 10.0.0.1 ping statistics --- 00:22:35.206 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:35.206 rtt min/avg/max/mdev = 0.407/0.407/0.407/0.000 ms 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@422 -- # return 0 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@18 -- # nvmfappstart -m 0xF 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@481 -- # nvmfpid=410501 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@482 -- # waitforlisten 410501 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@829 -- # '[' -z 410501 ']' 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:35.206 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
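The ip/iptables commands just traced are how nvmf_tcp_init builds the point-to-point test topology on the two e810 ports: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and becomes the target side (10.0.0.2), cvl_0_1 stays in the root namespace as the initiator side (10.0.0.1), and the two pings verify reachability before nvmf_tgt is launched inside the namespace. The same steps as a stand-alone sketch (interface, namespace and address values are the ones from this run):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side (root namespace)
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side (inside namespace)
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # accept TCP port 4420 traffic arriving on cvl_0_1
  ping -c 1 10.0.0.2                                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator
  # the target is then started inside the namespace:
  ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF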
00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:35.206 12:08:22 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:35.467 [2024-07-25 12:08:22.480479] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:22:35.467 [2024-07-25 12:08:22.480526] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:35.467 EAL: No free 2048 kB hugepages reported on node 1 00:22:35.467 [2024-07-25 12:08:22.542553] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:35.467 [2024-07-25 12:08:22.622763] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:35.467 [2024-07-25 12:08:22.622800] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:35.467 [2024-07-25 12:08:22.622807] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:35.467 [2024-07-25 12:08:22.622816] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:35.467 [2024-07-25 12:08:22.622821] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:35.467 [2024-07-25 12:08:22.622861] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:35.467 [2024-07-25 12:08:22.622972] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:35.467 [2024-07-25 12:08:22.623063] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:35.467 [2024-07-25 12:08:22.623065] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:22:36.036 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:22:36.036 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@862 -- # return 0 00:22:36.036 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:22:36.036 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@728 -- # xtrace_disable 00:22:36.036 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:36.295 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:22:36.295 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:22:36.295 12:08:23 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py load_subsystem_config 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py framework_get_config bdev 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # jq -r '.[].params | select(.name=="Nvme0").traddr' 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@30 -- # local_nvme_trid=0000:5e:00.0 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@31 -- # bdevs=' Malloc0' 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@33 
-- # '[' -n 0000:5e:00.0 ']' 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@34 -- # bdevs=' Malloc0 Nvme0n1' 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@37 -- # '[' tcp == rdma ']' 00:22:39.588 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o 00:22:39.847 [2024-07-25 12:08:26.879647] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:22:39.847 12:08:26 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:22:39.847 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:39.847 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:22:40.106 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@45 -- # for bdev in $bdevs 00:22:40.106 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@46 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:22:40.365 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@48 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:22:40.625 [2024-07-25 12:08:27.627189] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:22:40.625 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@49 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:22:40.625 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@52 -- # '[' -n 0000:5e:00.0 ']' 00:22:40.625 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@53 -- # perf_app -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:40.625 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@21 -- # '[' 0 -eq 1 ']' 00:22:40.625 12:08:27 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -i 0 -q 32 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:PCIe traddr:0000:5e:00.0' 00:22:42.042 Initializing NVMe Controllers 00:22:42.042 Attached to NVMe Controller at 0000:5e:00.0 [8086:0a54] 00:22:42.042 Associating PCIE (0000:5e:00.0) NSID 1 with lcore 0 00:22:42.042 Initialization complete. Launching workers. 
00:22:42.042 ======================================================== 00:22:42.042 Latency(us) 00:22:42.042 Device Information : IOPS MiB/s Average min max 00:22:42.042 PCIE (0000:5e:00.0) NSID 1 from core 0: 97484.87 380.80 327.77 30.12 7244.19 00:22:42.042 ======================================================== 00:22:42.042 Total : 97484.87 380.80 327.77 30.12 7244.19 00:22:42.042 00:22:42.042 12:08:29 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 1 -o 4096 -w randrw -M 50 -t 1 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:42.042 EAL: No free 2048 kB hugepages reported on node 1 00:22:43.441 Initializing NVMe Controllers 00:22:43.441 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:43.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:43.441 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:43.441 Initialization complete. Launching workers. 00:22:43.441 ======================================================== 00:22:43.441 Latency(us) 00:22:43.441 Device Information : IOPS MiB/s Average min max 00:22:43.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 125.93 0.49 8110.05 629.31 45889.39 00:22:43.441 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 66.96 0.26 15647.60 5988.15 47897.84 00:22:43.441 ======================================================== 00:22:43.441 Total : 192.90 0.75 10726.71 629.31 47897.84 00:22:43.441 00:22:43.441 12:08:30 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 32 -o 4096 -w randrw -M 50 -t 1 -HI -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:43.441 EAL: No free 2048 kB hugepages reported on node 1 00:22:44.821 Initializing NVMe Controllers 00:22:44.821 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:44.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:44.821 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:44.821 Initialization complete. Launching workers. 
00:22:44.821 ======================================================== 00:22:44.821 Latency(us) 00:22:44.821 Device Information : IOPS MiB/s Average min max 00:22:44.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 7649.90 29.88 4199.83 787.85 8841.74 00:22:44.821 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 3795.95 14.83 8467.35 6691.02 15982.96 00:22:44.821 ======================================================== 00:22:44.821 Total : 11445.85 44.71 5615.13 787.85 15982.96 00:22:44.821 00:22:44.821 12:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ e810 == \e\8\1\0 ]] 00:22:44.821 12:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@59 -- # [[ tcp == \r\d\m\a ]] 00:22:44.821 12:08:31 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -O 16384 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:22:44.821 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.362 Initializing NVMe Controllers 00:22:47.362 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.362 Controller IO queue size 128, less than required. 00:22:47.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.362 Controller IO queue size 128, less than required. 00:22:47.362 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:47.362 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:47.363 Initialization complete. Launching workers. 00:22:47.363 ======================================================== 00:22:47.363 Latency(us) 00:22:47.363 Device Information : IOPS MiB/s Average min max 00:22:47.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 806.99 201.75 165013.36 103335.15 292035.89 00:22:47.363 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 583.99 146.00 232586.69 124372.90 378885.44 00:22:47.363 ======================================================== 00:22:47.363 Total : 1390.98 347.74 193383.47 103335.15 378885.44 00:22:47.363 00:22:47.363 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 36964 -O 4096 -w randrw -M 50 -t 5 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' -c 0xf -P 4 00:22:47.363 EAL: No free 2048 kB hugepages reported on node 1 00:22:47.622 No valid NVMe controllers or AIO or URING devices found 00:22:47.622 Initializing NVMe Controllers 00:22:47.622 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:47.622 Controller IO queue size 128, less than required. 00:22:47.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.622 WARNING: IO size 36964 (-o) is not a multiple of nsid 1 sector size 512. Removing this ns from test 00:22:47.622 Controller IO queue size 128, less than required. 00:22:47.622 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:47.622 WARNING: IO size 36964 (-o) is not a multiple of nsid 2 sector size 512. 
Removing this ns from test 00:22:47.622 WARNING: Some requested NVMe devices were skipped 00:22:47.622 12:08:34 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_perf -q 128 -o 262144 -w randrw -M 50 -t 2 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' --transport-stat 00:22:47.622 EAL: No free 2048 kB hugepages reported on node 1 00:22:50.161 Initializing NVMe Controllers 00:22:50.161 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:22:50.161 Controller IO queue size 128, less than required. 00:22:50.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.161 Controller IO queue size 128, less than required. 00:22:50.161 Consider using lower queue depth or smaller IO size, because IO requests may be queued at the NVMe driver. 00:22:50.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 with lcore 0 00:22:50.161 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 with lcore 0 00:22:50.161 Initialization complete. Launching workers. 00:22:50.161 00:22:50.161 ==================== 00:22:50.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 statistics: 00:22:50.161 TCP transport: 00:22:50.161 polls: 56052 00:22:50.161 idle_polls: 18277 00:22:50.161 sock_completions: 37775 00:22:50.161 nvme_completions: 3055 00:22:50.161 submitted_requests: 4660 00:22:50.161 queued_requests: 1 00:22:50.161 00:22:50.161 ==================== 00:22:50.161 lcore 0, ns TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 statistics: 00:22:50.161 TCP transport: 00:22:50.161 polls: 64582 00:22:50.161 idle_polls: 23250 00:22:50.161 sock_completions: 41332 00:22:50.161 nvme_completions: 2341 00:22:50.161 submitted_requests: 3498 00:22:50.161 queued_requests: 1 00:22:50.161 ======================================================== 00:22:50.161 Latency(us) 00:22:50.161 Device Information : IOPS MiB/s Average min max 00:22:50.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 1 from core 0: 762.96 190.74 171891.16 134841.39 287264.97 00:22:50.161 TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) NSID 2 from core 0: 584.59 146.15 228158.80 100318.99 378410.81 00:22:50.161 ======================================================== 00:22:50.161 Total : 1347.55 336.89 196300.93 100318.99 378410.81 00:22:50.161 00:22:50.161 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@66 -- # sync 00:22:50.161 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@67 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@69 -- # '[' 0 -eq 1 ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@112 -- # trap - SIGINT SIGTERM EXIT 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- host/perf.sh@114 -- # nvmftestfini 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@488 -- # nvmfcleanup 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@117 -- # sync 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@120 -- # set +e 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@121 -- # for i in {1..20} 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- 
nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:22:50.421 rmmod nvme_tcp 00:22:50.421 rmmod nvme_fabrics 00:22:50.421 rmmod nvme_keyring 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@124 -- # set -e 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@125 -- # return 0 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@489 -- # '[' -n 410501 ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@490 -- # killprocess 410501 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@948 -- # '[' -z 410501 ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@952 -- # kill -0 410501 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # uname 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 410501 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 410501' 00:22:50.421 killing process with pid 410501 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@967 -- # kill 410501 00:22:50.421 12:08:37 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@972 -- # wait 410501 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:52.329 12:08:39 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_perf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:22:54.239 00:22:54.239 real 0m24.217s 00:22:54.239 user 1m6.333s 00:22:54.239 sys 0m6.574s 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_perf -- common/autotest_common.sh@10 -- # set +x 00:22:54.239 ************************************ 00:22:54.239 END TEST nvmf_perf 00:22:54.239 ************************************ 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@24 -- # run_test nvmf_fio_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host -- 
common/autotest_common.sh@1105 -- # xtrace_disable 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:22:54.239 ************************************ 00:22:54.239 START TEST nvmf_fio_host 00:22:54.239 ************************************ 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/fio.sh --transport=tcp 00:22:54.239 * Looking for test storage... 00:22:54.239 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.239 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # uname -s 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@5 -- # export PATH 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@47 -- # : 0 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@31 -- 
# NVMF_APP+=("${NO_HUGE[@]}") 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@12 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@14 -- # nvmftestinit 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@285 -- # xtrace_disable 00:22:54.240 12:08:41 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # pci_devs=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # net_devs=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # e810=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@296 -- # local -ga e810 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # x722=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@297 -- # local -ga x722 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # mlx=() 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@298 -- # local -ga mlx 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@302 -- # 
e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:22:59.523 Found 0000:86:00.0 (0x8086 - 0x159b) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:22:59.523 Found 0000:86:00.1 (0x8086 - 0x159b) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:22:59.523 12:08:46 
nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:22:59.523 Found net devices under 0000:86:00.0: cvl_0_0 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:22:59.523 Found net devices under 0000:86:00.1: cvl_0_1 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@414 -- # is_hw=yes 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:22:59.523 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:22:59.523 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:22:59.523 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.162 ms 00:22:59.524 00:22:59.524 --- 10.0.0.2 ping statistics --- 00:22:59.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.524 rtt min/avg/max/mdev = 0.162/0.162/0.162/0.000 ms 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:22:59.524 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:22:59.524 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.274 ms 00:22:59.524 00:22:59.524 --- 10.0.0.1 ping statistics --- 00:22:59.524 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:22:59.524 rtt min/avg/max/mdev = 0.274/0.274/0.274/0.000 ms 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@422 -- # return 0 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@16 -- # [[ y != y ]] 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@21 -- # timing_enter start_nvmf_tgt 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@23 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@24 -- # nvmfpid=416702 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@26 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@28 -- # waitforlisten 416702 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@829 -- # '[' -z 416702 ']' 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:22:59.524 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:22:59.524 12:08:46 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:22:59.784 [2024-07-25 12:08:46.802359] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
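For readability, the nvmftestinit plumbing traced above reduces to the following shell sketch; every command is copied from this run's trace, and the interface names (cvl_0_0/cvl_0_1), the cvl_0_0_ns_spdk namespace and the 10.0.0.0/24 addresses are specific to this host, so they will differ on other test beds:

  # move one e810 port into a private namespace to act as the NVMe/TCP target,
  # leave the other port in the root namespace as the initiator
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk
  ip addr add 10.0.0.1/24 dev cvl_0_1
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  # open the NVMe/TCP port and verify reachability in both directions
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1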
00:22:59.784 [2024-07-25 12:08:46.802405] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:22:59.784 EAL: No free 2048 kB hugepages reported on node 1 00:22:59.784 [2024-07-25 12:08:46.858685] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:22:59.784 [2024-07-25 12:08:46.933509] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:22:59.784 [2024-07-25 12:08:46.933550] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:22:59.784 [2024-07-25 12:08:46.933557] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:22:59.784 [2024-07-25 12:08:46.933564] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:22:59.784 [2024-07-25 12:08:46.933569] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:22:59.784 [2024-07-25 12:08:46.933609] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:22:59.784 [2024-07-25 12:08:46.933627] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:22:59.784 [2024-07-25 12:08:46.933718] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:22:59.784 [2024-07-25 12:08:46.933719] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@862 -- # return 0 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:00.721 [2024-07-25 12:08:47.783674] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@30 -- # timing_exit start_nvmf_tgt 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:00.721 12:08:47 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc1 00:23:00.980 Malloc1 00:23:00.980 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@33 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:01.240 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc1 00:23:01.240 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:01.500 [2024-07-25 12:08:48.586087] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:01.500 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:23:01.761 
12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@38 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@41 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:01.761 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:01.762 12:08:48 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/example_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' --bs=4096 00:23:02.021 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:23:02.021 fio-3.35 00:23:02.021 Starting 
1 thread 00:23:02.021 EAL: No free 2048 kB hugepages reported on node 1 00:23:04.591 00:23:04.591 test: (groupid=0, jobs=1): err= 0: pid=417182: Thu Jul 25 12:08:51 2024 00:23:04.591 read: IOPS=11.3k, BW=44.1MiB/s (46.2MB/s)(88.4MiB/2005msec) 00:23:04.591 slat (nsec): min=1610, max=246236, avg=1774.99, stdev=2278.38 00:23:04.591 clat (usec): min=3543, max=19192, avg=6638.59, stdev=1482.03 00:23:04.591 lat (usec): min=3545, max=19198, avg=6640.36, stdev=1482.26 00:23:04.591 clat percentiles (usec): 00:23:04.591 | 1.00th=[ 4424], 5.00th=[ 5014], 10.00th=[ 5342], 20.00th=[ 5669], 00:23:04.591 | 30.00th=[ 5932], 40.00th=[ 6128], 50.00th=[ 6325], 60.00th=[ 6521], 00:23:04.591 | 70.00th=[ 6849], 80.00th=[ 7308], 90.00th=[ 8225], 95.00th=[ 9241], 00:23:04.591 | 99.00th=[13042], 99.50th=[13698], 99.90th=[16450], 99.95th=[17957], 00:23:04.591 | 99.99th=[18482] 00:23:04.591 bw ( KiB/s): min=44352, max=46024, per=99.90%, avg=45082.00, stdev=718.88, samples=4 00:23:04.591 iops : min=11088, max=11506, avg=11270.50, stdev=179.72, samples=4 00:23:04.591 write: IOPS=11.2k, BW=43.8MiB/s (46.0MB/s)(87.9MiB/2005msec); 0 zone resets 00:23:04.591 slat (nsec): min=1669, max=233983, avg=1872.32, stdev=1726.17 00:23:04.591 clat (usec): min=2156, max=17176, avg=4689.67, stdev=976.66 00:23:04.591 lat (usec): min=2158, max=17410, avg=4691.54, stdev=977.08 00:23:04.591 clat percentiles (usec): 00:23:04.591 | 1.00th=[ 2933], 5.00th=[ 3359], 10.00th=[ 3654], 20.00th=[ 4047], 00:23:04.591 | 30.00th=[ 4293], 40.00th=[ 4490], 50.00th=[ 4686], 60.00th=[ 4817], 00:23:04.591 | 70.00th=[ 5014], 80.00th=[ 5145], 90.00th=[ 5473], 95.00th=[ 5997], 00:23:04.591 | 99.00th=[ 8029], 99.50th=[ 8848], 99.90th=[14615], 99.95th=[15795], 00:23:04.591 | 99.99th=[16712] 00:23:04.591 bw ( KiB/s): min=44032, max=45624, per=100.00%, avg=44898.00, stdev=691.57, samples=4 00:23:04.591 iops : min=11008, max=11406, avg=11224.50, stdev=172.89, samples=4 00:23:04.591 lat (msec) : 4=9.03%, 10=88.98%, 20=1.99% 00:23:04.591 cpu : usr=70.26%, sys=23.80%, ctx=31, majf=0, minf=5 00:23:04.591 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:23:04.591 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:04.591 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:04.591 issued rwts: total=22619,22498,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:04.591 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:04.591 00:23:04.591 Run status group 0 (all jobs): 00:23:04.591 READ: bw=44.1MiB/s (46.2MB/s), 44.1MiB/s-44.1MiB/s (46.2MB/s-46.2MB/s), io=88.4MiB (92.6MB), run=2005-2005msec 00:23:04.591 WRITE: bw=43.8MiB/s (46.0MB/s), 43.8MiB/s-43.8MiB/s (46.0MB/s-46.0MB/s), io=87.9MiB (92.2MB), run=2005-2005msec 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@45 -- # fio_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1360 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1339 -- # local sanitizers 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1341 -- # shift 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1343 -- # local asan_lib= 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libasan 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1345 -- # asan_lib= 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_nvme' 00:23:04.591 12:08:51 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme/mock_sgl_config.fio '--filename=trtype=tcp adrfam=IPv4 traddr=10.0.0.2 trsvcid=4420 ns=1' 00:23:04.592 test: (g=0): rw=randrw, bs=(R) 16.0KiB-16.0KiB, (W) 16.0KiB-16.0KiB, (T) 16.0KiB-16.0KiB, ioengine=spdk, iodepth=128 00:23:04.592 fio-3.35 00:23:04.592 Starting 1 thread 00:23:04.592 EAL: No free 2048 kB hugepages reported on node 1 00:23:07.133 00:23:07.133 test: (groupid=0, jobs=1): err= 0: pid=417749: Thu Jul 25 12:08:54 2024 00:23:07.133 read: IOPS=8773, BW=137MiB/s (144MB/s)(275MiB/2007msec) 00:23:07.133 slat (usec): min=2, max=102, avg= 2.93, stdev= 1.50 00:23:07.133 clat (usec): min=3087, max=51709, avg=9212.91, stdev=4696.35 00:23:07.133 lat (usec): min=3090, max=51711, avg=9215.84, stdev=4696.61 00:23:07.133 clat percentiles (usec): 00:23:07.133 | 1.00th=[ 4293], 5.00th=[ 5342], 10.00th=[ 5932], 20.00th=[ 6652], 00:23:07.133 | 30.00th=[ 7242], 40.00th=[ 7832], 50.00th=[ 8455], 60.00th=[ 8979], 00:23:07.133 | 70.00th=[ 9634], 80.00th=[10552], 90.00th=[12125], 95.00th=[14353], 00:23:07.133 | 99.00th=[28967], 99.50th=[46924], 99.90th=[50594], 99.95th=[51119], 00:23:07.133 | 99.99th=[51643] 00:23:07.133 bw ( KiB/s): min=58624, max=82144, per=49.47%, avg=69448.00, 
stdev=10233.61, samples=4 00:23:07.133 iops : min= 3664, max= 5134, avg=4340.50, stdev=639.60, samples=4 00:23:07.133 write: IOPS=5146, BW=80.4MiB/s (84.3MB/s)(141MiB/1752msec); 0 zone resets 00:23:07.133 slat (usec): min=30, max=384, avg=32.59, stdev= 8.25 00:23:07.133 clat (usec): min=3384, max=37873, avg=9602.47, stdev=3578.99 00:23:07.133 lat (usec): min=3415, max=37918, avg=9635.07, stdev=3582.82 00:23:07.133 clat percentiles (usec): 00:23:07.133 | 1.00th=[ 6390], 5.00th=[ 6915], 10.00th=[ 7242], 20.00th=[ 7701], 00:23:07.133 | 30.00th=[ 8160], 40.00th=[ 8586], 50.00th=[ 8979], 60.00th=[ 9372], 00:23:07.133 | 70.00th=[ 9765], 80.00th=[10421], 90.00th=[11338], 95.00th=[12518], 00:23:07.133 | 99.00th=[29492], 99.50th=[30540], 99.90th=[34341], 99.95th=[35390], 00:23:07.133 | 99.99th=[38011] 00:23:07.133 bw ( KiB/s): min=61536, max=85504, per=87.60%, avg=72136.00, stdev=10398.49, samples=4 00:23:07.133 iops : min= 3846, max= 5344, avg=4508.50, stdev=649.91, samples=4 00:23:07.133 lat (msec) : 4=0.38%, 10=73.74%, 20=23.23%, 50=2.53%, 100=0.12% 00:23:07.133 cpu : usr=84.55%, sys=12.21%, ctx=114, majf=0, minf=2 00:23:07.133 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.2%, 16=0.4%, 32=0.7%, >=64=98.6% 00:23:07.133 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:23:07.133 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:23:07.133 issued rwts: total=17609,9017,0,0 short=0,0,0,0 dropped=0,0,0,0 00:23:07.134 latency : target=0, window=0, percentile=100.00%, depth=128 00:23:07.134 00:23:07.134 Run status group 0 (all jobs): 00:23:07.134 READ: bw=137MiB/s (144MB/s), 137MiB/s-137MiB/s (144MB/s-144MB/s), io=275MiB (289MB), run=2007-2007msec 00:23:07.134 WRITE: bw=80.4MiB/s (84.3MB/s), 80.4MiB/s-80.4MiB/s (84.3MB/s-84.3MB/s), io=141MiB (148MB), run=1752-1752msec 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@49 -- # '[' 0 -eq 1 ']' 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@85 -- # rm -f ./local-test-0-verify.state 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- host/fio.sh@86 -- # nvmftestfini 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@117 -- # sync 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@120 -- # set +e 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:07.134 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:07.134 rmmod nvme_tcp 00:23:07.134 rmmod nvme_fabrics 00:23:07.134 rmmod nvme_keyring 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@124 -- # set -e 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@125 -- # return 0 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@489 -- # '[' -n 416702 ']' 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- 
nvmf/common.sh@490 -- # killprocess 416702 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@948 -- # '[' -z 416702 ']' 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@952 -- # kill -0 416702 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # uname 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 416702 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 416702' 00:23:07.392 killing process with pid 416702 00:23:07.392 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@967 -- # kill 416702 00:23:07.393 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@972 -- # wait 416702 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:07.651 12:08:54 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:09.556 00:23:09.556 real 0m15.493s 00:23:09.556 user 0m47.694s 00:23:09.556 sys 0m5.917s 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host.nvmf_fio_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.556 ************************************ 00:23:09.556 END TEST nvmf_fio_host 00:23:09.556 ************************************ 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@25 -- # run_test nvmf_failover /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:09.556 ************************************ 00:23:09.556 START TEST nvmf_failover 00:23:09.556 ************************************ 00:23:09.556 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/failover.sh --transport=tcp 00:23:09.815 * Looking for 
test storage... 00:23:09.815 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # uname -s 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:09.815 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@5 -- # export PATH 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@47 -- # : 0 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@11 -- # MALLOC_BDEV_SIZE=64 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@14 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@16 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@18 -- # nvmftestinit 
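Note the two RPC endpoints the failover test just configured: rpc_py talks to the nvmf target on its default /var/tmp/spdk.sock, while the same script is pointed at /var/tmp/bdevperf.sock (via -s) to drive the bdevperf initiator. A condensed sketch of that split, using the commands this test issues further down in the trace (the three listeners are collapsed into a loop here; the script adds them one by one):

  rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # target side (default socket /var/tmp/spdk.sock): build the subsystem
  $rpc_py nvmf_create_transport -t tcp -o -u 8192
  $rpc_py bdev_malloc_create 64 512 -b Malloc0
  $rpc_py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
  $rpc_py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
  for port in 4420 4421 4422; do   # three listeners so the initiator has paths to fail over to
    $rpc_py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s $port
  done
  # initiator side (-s /var/tmp/bdevperf.sock): attach bdevperf to the first listener
  $rpc_py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1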
00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@285 -- # xtrace_disable 00:23:09.816 12:08:56 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # pci_devs=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # net_devs=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # e810=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@296 -- # local -ga e810 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # x722=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@297 -- # local -ga x722 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # mlx=() 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@298 -- # local -ga mlx 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:15.093 12:09:01 
nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:15.093 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:15.093 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:15.093 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:15.094 Found net devices under 0000:86:00.0: cvl_0_0 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:15.094 Found net devices under 0000:86:00.1: cvl_0_1 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@414 -- # is_hw=yes 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- 
nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:15.094 12:09:01 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:15.094 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:15.094 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.173 ms 00:23:15.094 00:23:15.094 --- 10.0.0.2 ping statistics --- 00:23:15.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.094 rtt min/avg/max/mdev = 0.173/0.173/0.173/0.000 ms 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:15.094 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:15.094 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.394 ms 00:23:15.094 00:23:15.094 --- 10.0.0.1 ping statistics --- 00:23:15.094 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:15.094 rtt min/avg/max/mdev = 0.394/0.394/0.394/0.000 ms 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@422 -- # return 0 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@20 -- # nvmfappstart -m 0xE 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@481 -- # nvmfpid=421731 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@482 -- # waitforlisten 421731 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 421731 ']' 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:15.094 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:15.094 12:09:02 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.094 [2024-07-25 12:09:02.236102] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:23:15.094 [2024-07-25 12:09:02.236144] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:15.094 EAL: No free 2048 kB hugepages reported on node 1 00:23:15.094 [2024-07-25 12:09:02.293903] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:23:15.353 [2024-07-25 12:09:02.374844] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:15.353 [2024-07-25 12:09:02.374879] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:15.353 [2024-07-25 12:09:02.374886] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:15.353 [2024-07-25 12:09:02.374892] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:15.353 [2024-07-25 12:09:02.374897] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
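The nvmfappstart/waitforlisten exchange above amounts to launching the target inside the test namespace and blocking until its RPC socket answers before any nvmf_* RPCs are sent; roughly as follows (the backgrounding and PID capture reflect how the autotest helpers behave and are sketched here rather than quoted from the script):

  # launch nvmf_tgt in the target namespace with core mask 0xE (reactors on cores 1-3, as logged)
  ip netns exec cvl_0_0_ns_spdk \
    /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
  nvmfpid=$!
  # wait until the app listens on /var/tmp/spdk.sock before issuing RPCs
  waitforlisten "$nvmfpid"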
00:23:15.353 [2024-07-25 12:09:02.374997] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:23:15.353 [2024-07-25 12:09:02.375083] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:23:15.353 [2024-07-25 12:09:02.375085] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:15.921 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@22 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:23:16.180 [2024-07-25 12:09:03.236027] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:16.180 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:23:16.438 Malloc0 00:23:16.438 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:23:16.438 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@25 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:23:16.697 12:09:03 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@26 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:16.956 [2024-07-25 12:09:04.000540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:16.956 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:16.956 [2024-07-25 12:09:04.189064] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@28 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:17.216 [2024-07-25 12:09:04.369617] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@30 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 15 -f 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@31 -- # bdevperf_pid=422097 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; cat $testdir/try.txt; rm -f $testdir/try.txt; killprocess $bdevperf_pid; 
nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@34 -- # waitforlisten 422097 /var/tmp/bdevperf.sock 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 422097 ']' 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:17.216 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:17.216 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:17.475 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:17.475 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:17.475 12:09:04 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@35 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.043 NVMe0n1 00:23:18.043 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:18.302 00:23:18.302 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@39 -- # run_test_pid=422326 00:23:18.302 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@38 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:18.302 12:09:05 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@41 -- # sleep 1 00:23:19.238 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@43 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:19.497 [2024-07-25 12:09:06.583474] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583522] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583531] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583537] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583543] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583550] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 [2024-07-25 12:09:06.583555] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2008f50 is same with the state(5) to be set 00:23:19.497 12:09:06 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@45 -- # sleep 3 00:23:22.788 12:09:09 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@47 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:22.788 00 00:23:22.788 12:09:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@48 --
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:23.049 [2024-07-25 12:09:10.187104] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187151] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187159] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187170] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187176] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187182] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187187] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187193] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187199] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187205] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187210] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187215] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187221] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187227] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187232] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187238] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187243] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187249] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187254] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187260] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.049 [2024-07-25 12:09:10.187266] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of 
tqpair=0x2009d70 is same with the state(5) to be set 00:23:23.050 12:09:10 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@50 -- # sleep 3 00:23:26.440 12:09:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@53 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:23:26.440 [2024-07-25 12:09:13.386589] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:26.440 12:09:13 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@55 -- # sleep 1 00:23:27.376 12:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
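At this point the target-side churn is complete: the test removed the 4420 listener while I/O was running, attached a third path on 4422, removed 4421, re-added 4420 and finally removed 4422. Pieced together from the commands logged above, the host/target interplay of this failover test reduces to roughly the sketch below; every command is copied from the log, and only the sock/rootdir variables, the comments and the omission of waitforlisten/cleanup handling are editorial simplifications.

  rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  sock=/var/tmp/bdevperf.sock
  # bdevperf in RPC-driven mode (-z): queue depth 128, 4 KiB I/O, verify workload, 15 s
  $rootdir/build/examples/bdevperf -z -r $sock -q 128 -o 4096 -w verify -t 15 -f &
  # two portals attached under one controller name, so NVMe0n1 has a second path to fail over to
  $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rootdir/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests &   # start the timed verify run
  # tear listeners down under the active path while the run is in flight
  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 3
  $rootdir/scripts/rpc.py -s $sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
  sleep 3
  $rootdir/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
  sleep 1
  $rootdir/scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422

The bursts of 'The recv state of tqpair=... is same with the state(5) to be set' messages right after each remove_listener, and the long run of 'ABORTED - SQ DELETION' completions in the bdevperf output further down, appear to be the expected fallout of tearing a listener down under load: queued I/O on the dying path is aborted and bdev_nvme fails over to the remaining portal.
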
00:23:27.376 [2024-07-25 12:09:14.589057] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589099] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589107] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589114] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589120] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589126] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589132] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589138] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589144] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589149] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589161] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589167] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589173] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589179] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 [2024-07-25 12:09:14.589184] tcp.c:1653:nvmf_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x21c3b40 is same with the state(5) to be set 00:23:27.376 12:09:14 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@59 -- # wait 422326 00:23:33.949 0 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@61 -- # killprocess 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 422097 ']' 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- 
# echo 'killing process with pid 422097' 00:23:33.949 killing process with pid 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 422097 00:23:33.949 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@63 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:33.949 [2024-07-25 12:09:04.428007] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:23:33.949 [2024-07-25 12:09:04.428062] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid422097 ] 00:23:33.949 EAL: No free 2048 kB hugepages reported on node 1 00:23:33.949 [2024-07-25 12:09:04.482725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:33.949 [2024-07-25 12:09:04.557890] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:33.950 Running I/O for 15 seconds... 00:23:33.950 [2024-07-25 12:09:06.585068] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.950 [2024-07-25 12:09:06.585102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585112] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.950 [2024-07-25 12:09:06.585120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585128] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.950 [2024-07-25 12:09:06.585135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585142] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.950 [2024-07-25 12:09:06.585148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585155] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1a45540 is same with the state(5) to be set 00:23:33.950 [2024-07-25 12:09:06.585211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:92224 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:92232 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585238] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:70 nsid:1 lba:92240 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:92248 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585276] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:92256 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:92264 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:92272 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:92280 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:91840 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:91848 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:91856 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585383] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:91864 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:91872 len:8 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:91880 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:91888 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:91896 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:91904 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:91912 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:91920 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585499] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:91928 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:91936 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:91944 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:91952 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:23:33.950 [2024-07-25 12:09:06.585549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:91960 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:92288 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585585] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:92296 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585599] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:92304 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585613] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:92312 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:92320 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585644] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:92328 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:92336 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:92344 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:92352 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585694] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:92360 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585708] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:92368 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:92376 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585737] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:92384 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:92392 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:92400 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:92408 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.950 [2024-07-25 12:09:06.585795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:91968 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:91976 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.950 [2024-07-25 12:09:06.585831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:91984 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:23:33.950 [2024-07-25 12:09:06.585837] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
[... repeated nvme_qpair.c notices, 2024-07-25 12:09:06.585845 through 12:09:06.587009: each remaining queued READ (sqid:1, lba 91992-92216, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1, lba 92416-92824, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is printed by 243:nvme_io_qpair_print_command and completed by 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:33.951 [2024-07-25 12:09:06.587017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:92832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.951 [2024-07-25 12:09:06.587024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:33.951 [2024-07-25 12:09:06.587032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:92840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.951 [2024-07-25 12:09:06.587038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:33.951 [2024-07-25 12:09:06.587050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:92848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:23:33.951 [2024-07-25 12:09:06.587056] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:33.951 [2024-07-25 12:09:06.587073] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 
00:23:33.951 [2024-07-25 12:09:06.587079] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 
00:23:33.951 [2024-07-25 12:09:06.587085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:92856 len:8 PRP1 0x0 PRP2 0x0 
00:23:33.951 [2024-07-25 12:09:06.587091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:23:33.951 [2024-07-25 12:09:06.587132] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a38510 was disconnected and freed. reset controller. 
00:23:33.951 [2024-07-25 12:09:06.587141] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 
00:23:33.951 [2024-07-25 12:09:06.587154] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:23:33.951 [2024-07-25 12:09:06.590004] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 
00:23:33.951 [2024-07-25 12:09:06.590033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a45540 (9): Bad file descriptor 
00:23:33.951 [2024-07-25 12:09:06.712786] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
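For reference, the failover captured above (10.0.0.2:4420 -> 10.0.0.2:4421 on nqn.2016-06.io.spdk:cnode1) is the behavior bdev_nvme exhibits when the same subsystem has been attached over two TCP paths. A minimal sketch of such a setup via SPDK's rpc.py follows; the controller name "nvme0" and the -x multipath-mode flag are assumptions (option spelling varies across SPDK releases), while the addresses and subsystem NQN are taken from the log:
  # register the primary path (addresses/NQN from the log; controller name "nvme0" is assumed)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
  # register the second path as a failover target (-x failover is an assumed multipath mode)
  scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x failover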
[... repeated nvme_qpair.c notices, 2024-07-25 12:09:10.189468 through 12:09:10.190689: each queued READ (sqid:1, lba 55848-56104, len:8, SGL TRANSPORT DATA BLOCK) and WRITE (sqid:1, lba 56168-56544, len:8, SGL DATA BLOCK OFFSET 0x0 len:0x1000) is printed by 243:nvme_io_qpair_print_command and completed by 474:spdk_nvme_print_completion with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
[... repeated nvme_qpair.c notices, 2024-07-25 12:09:10.190712 through 12:09:10.191297: 579:nvme_qpair_abort_queued_reqs reports "aborting queued i/o" and 558:nvme_qpair_manual_complete_request completes each remaining WRITE (sqid:1 cid:0, lba 56552-56736, len:8, PRP1 0x0 PRP2 0x0) manually with ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 ...]
00:23:33.953 [2024-07-25 12:09:10.191302] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56744 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.191308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.191315] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.191320] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.191325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56752 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.191332] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.191338] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.191343] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.191349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56760 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.191355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.191363] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.191368] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.191374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56768 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.191380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.191387] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.191392] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.191397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56776 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.191405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.191411] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.191416] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56784 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202726] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202733] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:0 nsid:1 lba:56792 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202748] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202757] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202764] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56800 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202779] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202789] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202796] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56808 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202820] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202827] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56816 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202851] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202858] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56824 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202883] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202890] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56832 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202914] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202920] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56840 len:8 PRP1 0x0 PRP2 0x0 
00:23:33.953 [2024-07-25 12:09:10.202940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202949] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202956] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56848 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.202971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.202980] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.202987] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.202994] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56856 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203011] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203018] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:56864 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203046] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203053] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203061] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56112 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203078] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203084] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56120 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203108] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203115] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56128 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203131] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203139] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203146] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56136 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203171] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203179] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56144 len:8 PRP1 0x0 PRP2 0x0 00:23:33.953 [2024-07-25 12:09:10.203195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.953 [2024-07-25 12:09:10.203204] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.953 [2024-07-25 12:09:10.203210] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.953 [2024-07-25 12:09:10.203217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56152 len:8 PRP1 0x0 PRP2 0x0 00:23:33.954 [2024-07-25 12:09:10.203226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.954 [2024-07-25 12:09:10.203234] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.954 [2024-07-25 12:09:10.203241] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.954 [2024-07-25 12:09:10.203248] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:56160 len:8 PRP1 0x0 PRP2 0x0 00:23:33.954 [2024-07-25 12:09:10.203256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.954 [2024-07-25 12:09:10.203302] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a693f0 was disconnected and freed. reset controller. 
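The burst of ABORTED - SQ DELETION (00/08) completions above is the host-side bdev_nvme/nvme_qpair code manually completing every request still queued on the I/O qpair once that qpair is disconnected, just before the path failover that follows. On the target side such a teardown is commonly driven by dropping the active listener; a hedged sketch of that trigger, assuming the standard scripts/rpc.py helper (not this job's exact command line; trid values taken from the log above):
./scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421  # hypothetical trigger for the qpair teardown logged above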
00:23:33.954 [2024-07-25 12:09:10.203314] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4421 to 10.0.0.2:4422
00:23:33.954 [2024-07-25 12:09:10.203339] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000
00:23:33.954 [2024-07-25 12:09:10.203349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.954 [2024-07-25 12:09:10.203359] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000
00:23:33.954 [2024-07-25 12:09:10.203368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.954 [2024-07-25 12:09:10.203377] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000
00:23:33.954 [2024-07-25 12:09:10.203386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.954 [2024-07-25 12:09:10.203395] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000
00:23:33.954 [2024-07-25 12:09:10.203404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
00:23:33.954 [2024-07-25 12:09:10.203413] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state.
00:23:33.954 [2024-07-25 12:09:10.203451] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a45540 (9): Bad file descriptor
00:23:33.954 [2024-07-25 12:09:10.207328] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller
00:23:33.954 [2024-07-25 12:09:10.288458] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful.
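A failover from 10.0.0.2:4421 to 10.0.0.2:4422 like the one logged here requires the bdev_nvme controller to already know both transport IDs for nqn.2016-06.io.spdk:cnode1. A minimal sketch of how the two paths are typically registered, assuming the standard scripts/rpc.py helper and a hypothetical controller name Nvme0 (illustrative only, not taken from this job's scripts):
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # primary path (addresses from the log; controller name is hypothetical)
./scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1  # alternate path available for failover
When the first path's qpairs drop, bdev_nvme_failover_trid switches the controller to the second trid and the subsequent controller reset reconnects the admin and I/O qpairs there, which is the "Resetting controller successful" line above.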
00:23:33.954 [2024-07-25 12:09:14.589847] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:80832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:23:33.954 [2024-07-25 12:09:14.589881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0
[... the same print_command / print_completion sequence repeats between 12:09:14.589896 and 12:09:14.591639 for the remaining outstanding WRITE commands (lba 80840 through 81408, len:8, various cids, SGL DATA BLOCK OFFSET 0x0 len:0x1000) and READ commands (lba 80448 through 80824, len:8, various cids, SGL TRANSPORT DATA BLOCK TRANSPORT 0x0), all completed as ABORTED - SQ DELETION (00/08) qid:1 cid:0 ...]
00:23:33.955 [2024-07-25 12:09:14.591646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:81416 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 
[2024-07-25 12:09:14.591653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:81424 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 [2024-07-25 12:09:14.591667] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:81432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 [2024-07-25 12:09:14.591681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:81440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 [2024-07-25 12:09:14.591694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591704] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:81448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 [2024-07-25 12:09:14.591710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:81456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:23:33.955 [2024-07-25 12:09:14.591724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591742] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:23:33.955 [2024-07-25 12:09:14.591748] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:23:33.955 [2024-07-25 12:09:14.591755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:81464 len:8 PRP1 0x0 PRP2 0x0 00:23:33.955 [2024-07-25 12:09:14.591763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591806] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1a690b0 was disconnected and freed. reset controller. 
00:23:33.955 [2024-07-25 12:09:14.591814] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4422 to 10.0.0.2:4420 00:23:33.955 [2024-07-25 12:09:14.591833] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.955 [2024-07-25 12:09:14.591840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591848] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.955 [2024-07-25 12:09:14.591855] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591862] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.955 [2024-07-25 12:09:14.591868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.955 [2024-07-25 12:09:14.591875] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:33.955 [2024-07-25 12:09:14.591881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:33.956 [2024-07-25 12:09:14.591888] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:33.956 [2024-07-25 12:09:14.594720] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:33.956 [2024-07-25 12:09:14.594748] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1a45540 (9): Bad file descriptor 00:23:33.956 [2024-07-25 12:09:14.717827] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:23:33.956 00:23:33.956 Latency(us) 00:23:33.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.956 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:33.956 Verification LBA range: start 0x0 length 0x4000 00:23:33.956 NVMe0n1 : 15.01 10862.51 42.43 1006.05 0.00 10762.29 1524.42 27354.16 00:23:33.956 =================================================================================================================== 00:23:33.956 Total : 10862.51 42.43 1006.05 0.00 10762.29 1524.42 27354.16 00:23:33.956 Received shutdown signal, test time was about 15.000000 seconds 00:23:33.956 00:23:33.956 Latency(us) 00:23:33.956 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:33.956 =================================================================================================================== 00:23:33.956 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # grep -c 'Resetting controller successful' 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@65 -- # count=3 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@67 -- # (( count != 3 )) 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@73 -- # bdevperf_pid=425236 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@72 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 1 -f 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@75 -- # waitforlisten 425236 /var/tmp/bdevperf.sock 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@829 -- # '[' -z 425236 ']' 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:23:33.956 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
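The pass/fail gate traced just above (host/failover.sh@65-67) is a plain grep over the captured bdevperf output. A minimal sketch of that check, assuming the output was saved to the try.txt file referenced later in this log:
# Count how many controller resets bdevperf reported; the script expects exactly 3.
count=$(grep -c 'Resetting controller successful' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt)
if (( count != 3 )); then
    echo "expected 3 successful controller resets, saw $count" >&2
    exit 1
fi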
00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:33.956 12:09:20 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:34.524 12:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:34.524 12:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@862 -- # return 0 00:23:34.524 12:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:23:34.784 [2024-07-25 12:09:21.812909] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:34.784 12:09:21 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@77 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422 00:23:34.784 [2024-07-25 12:09:22.009504] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4422 *** 00:23:35.043 12:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@78 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.302 NVMe0n1 00:23:35.302 12:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.562 00:23:35.562 12:09:22 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@80 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:35.821 00:23:35.821 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:35.821 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@82 -- # grep -q NVMe0 00:23:36.080 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@84 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:36.339 12:09:23 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@87 -- # sleep 3 00:23:39.630 12:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:39.630 12:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@88 -- # grep -q NVMe0 00:23:39.630 12:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@90 -- # run_test_pid=426166 00:23:39.630 12:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@89 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bdevperf.sock perform_tests 00:23:39.630 12:09:26 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@92 -- # wait 426166 00:23:40.569 0 00:23:40.569 12:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@94 -- # cat 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:40.569 [2024-07-25 12:09:20.838802] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:23:40.569 [2024-07-25 12:09:20.838852] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid425236 ] 00:23:40.569 EAL: No free 2048 kB hugepages reported on node 1 00:23:40.569 [2024-07-25 12:09:20.893911] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:40.569 [2024-07-25 12:09:20.963122] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:40.569 [2024-07-25 12:09:23.361097] bdev_nvme.c:1870:bdev_nvme_failover_trid: *NOTICE*: Start failover from 10.0.0.2:4420 to 10.0.0.2:4421 00:23:40.569 [2024-07-25 12:09:23.361145] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.569 [2024-07-25 12:09:23.361155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.569 [2024-07-25 12:09:23.361164] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.569 [2024-07-25 12:09:23.361171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.569 [2024-07-25 12:09:23.361177] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.569 [2024-07-25 12:09:23.361184] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.569 [2024-07-25 12:09:23.361191] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:40.569 [2024-07-25 12:09:23.361198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:40.569 [2024-07-25 12:09:23.361205] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:23:40.569 [2024-07-25 12:09:23.361229] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:23:40.569 [2024-07-25 12:09:23.361243] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xda8540 (9): Bad file descriptor 00:23:40.570 [2024-07-25 12:09:23.454311] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 00:23:40.570 Running I/O for 1 seconds... 
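Condensed from the trace above, the failover exercise boils down to the following RPC sequence (a sketch only: rpc.py and bdevperf.py stand for the full script paths shown in the log):
# Expose two more listeners on the subsystem, attach all three paths to the
# bdevperf-side controller, then drop the active path to force a failover.
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421
rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4422
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
# Remove the path currently in use; bdev_nvme fails over to one of the remaining listeners
# ("Start failover from 10.0.0.2:4420 to 10.0.0.2:4421" in the try.txt output above).
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1
sleep 3
rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers | grep -q NVMe0
bdevperf.py -s /var/tmp/bdevperf.sock perform_tests   # run the queued 1-second verify job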
00:23:40.570 00:23:40.570 Latency(us) 00:23:40.570 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:23:40.570 Job: NVMe0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:23:40.570 Verification LBA range: start 0x0 length 0x4000 00:23:40.570 NVMe0n1 : 1.01 10799.90 42.19 0.00 0.00 11805.10 2493.22 29633.67 00:23:40.570 =================================================================================================================== 00:23:40.570 Total : 10799.90 42.19 0.00 0.00 11805.10 2493.22 29633.67 00:23:40.570 12:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.570 12:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@95 -- # grep -q NVMe0 00:23:40.829 12:09:27 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@98 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4422 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:40.829 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:40.829 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@99 -- # grep -q NVMe0 00:23:41.089 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@100 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_detach_controller NVMe0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 00:23:41.348 12:09:28 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@101 -- # sleep 3 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_controllers 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@103 -- # grep -q NVMe0 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@108 -- # killprocess 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 425236 ']' 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 425236' 00:23:44.640 killing process with pid 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 425236 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@110 -- # sync 00:23:44.640 12:09:31 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@111 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@113 -- # trap - SIGINT SIGTERM EXIT 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@115 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- host/failover.sh@116 -- # nvmftestfini 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@488 -- # nvmfcleanup 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@117 -- # sync 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@120 -- # set +e 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@121 -- # for i in {1..20} 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:23:44.900 rmmod nvme_tcp 00:23:44.900 rmmod nvme_fabrics 00:23:44.900 rmmod nvme_keyring 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@124 -- # set -e 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@125 -- # return 0 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@489 -- # '[' -n 421731 ']' 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@490 -- # killprocess 421731 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@948 -- # '[' -z 421731 ']' 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@952 -- # kill -0 421731 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # uname 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:23:44.900 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 421731 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@966 -- # echo 'killing process with pid 421731' 00:23:45.159 killing process with pid 421731 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@967 -- # kill 421731 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@972 -- # wait 421731 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:23:45.159 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:23:45.160 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:23:45.160 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@278 -- # remove_spdk_ns 00:23:45.160 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:45.160 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- 
common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:45.160 12:09:32 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:23:47.732 00:23:47.732 real 0m37.654s 00:23:47.732 user 2m1.453s 00:23:47.732 sys 0m7.380s 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@1124 -- # xtrace_disable 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_failover -- common/autotest_common.sh@10 -- # set +x 00:23:47.732 ************************************ 00:23:47.732 END TEST nvmf_failover 00:23:47.732 ************************************ 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@26 -- # run_test nvmf_host_discovery /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:23:47.732 ************************************ 00:23:47.732 START TEST nvmf_host_discovery 00:23:47.732 ************************************ 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery.sh --transport=tcp 00:23:47.732 * Looking for test storage... 00:23:47.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # uname -s 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" 
"--hostid=$NVME_HOSTID") 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@5 -- # export PATH 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- paths/export.sh@6 -- # echo 
/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@47 -- # : 0 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@51 -- # have_pci_nics=0 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@11 -- # '[' tcp == rdma ']' 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@16 -- # DISCOVERY_PORT=8009 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@17 -- # DISCOVERY_NQN=nqn.2014-08.org.nvmexpress.discovery 00:23:47.732 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@20 -- # NQN=nqn.2016-06.io.spdk:cnode 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@22 -- # HOST_NQN=nqn.2021-12.io.spdk:test 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@23 -- # HOST_SOCK=/tmp/host.sock 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@25 -- # nvmftestinit 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@448 -- # prepare_net_devs 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@410 -- # local -g is_hw=no 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@412 -- # remove_spdk_ns 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:23:47.733 12:09:34 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@285 -- # xtrace_disable 00:23:47.733 12:09:34 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # pci_devs=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@291 -- # local -a pci_devs 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # pci_net_devs=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # pci_drivers=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@293 -- # local -A pci_drivers 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # net_devs=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@295 -- # local -ga net_devs 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # e810=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@296 -- # local -ga e810 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # x722=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@297 -- # local -ga x722 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # mlx=() 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@298 -- # local -ga mlx 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:23:53.021 12:09:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:23:53.021 Found 0000:86:00.0 (0x8086 - 0x159b) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:23:53.021 Found 0000:86:00.1 (0x8086 - 0x159b) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:23:53.021 Found net devices under 0000:86:00.0: cvl_0_0 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@382 -- # for pci 
in "${pci_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@390 -- # [[ up == up ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:23:53.021 Found net devices under 0000:86:00.1: cvl_0_1 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@414 -- # is_hw=yes 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:23:53.021 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:23:53.022 12:09:39 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:23:53.022 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:23:53.022 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.184 ms 00:23:53.022 00:23:53.022 --- 10.0.0.2 ping statistics --- 00:23:53.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.022 rtt min/avg/max/mdev = 0.184/0.184/0.184/0.000 ms 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:23:53.022 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:23:53.022 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.402 ms 00:23:53.022 00:23:53.022 --- 10.0.0.1 ping statistics --- 00:23:53.022 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:23:53.022 rtt min/avg/max/mdev = 0.402/0.402/0.402/0.000 ms 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@422 -- # return 0 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@30 -- # nvmfappstart -m 0x2 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@722 -- # xtrace_disable 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@481 -- # nvmfpid=430389 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@482 -- # waitforlisten 430389 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 430389 ']' 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 
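nvmftestinit, traced above, builds the TCP test bed by moving the target-side port into its own network namespace. A sketch of that setup using the interface names and addresses from this run (cvl_0_0 and cvl_0_1 are specific to this machine):
# Target NIC lives in its own namespace; initiator NIC stays in the default namespace.
ip netns add cvl_0_0_ns_spdk
ip link set cvl_0_0 netns cvl_0_0_ns_spdk
ip addr add 10.0.0.1/24 dev cvl_0_1                                  # initiator side
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0    # target side
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT         # let NVMe/TCP traffic in
ping -c 1 10.0.0.2                                                   # initiator -> target sanity check
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1                     # target -> initiator sanity check
# The nvmf target is then started inside the namespace (backgrounded by the harness):
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &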
00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:23:53.022 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.022 12:09:39 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.022 [2024-07-25 12:09:39.932829] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:23:53.022 [2024-07-25 12:09:39.932868] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:23:53.022 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.022 [2024-07-25 12:09:39.990160] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.022 [2024-07-25 12:09:40.083503] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:23:53.022 [2024-07-25 12:09:40.083537] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:23:53.022 [2024-07-25 12:09:40.083545] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:23:53.022 [2024-07-25 12:09:40.083551] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:23:53.022 [2024-07-25 12:09:40.083556] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:23:53.022 [2024-07-25 12:09:40.083572] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@728 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@32 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.591 [2024-07-25 12:09:40.790190] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@33 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # 
set +x 00:23:53.591 [2024-07-25 12:09:40.798377] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@35 -- # rpc_cmd bdev_null_create null0 1000 512 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.591 null0 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@36 -- # rpc_cmd bdev_null_create null1 1000 512 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.591 null1 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@37 -- # rpc_cmd bdev_wait_for_examine 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@45 -- # hostpid=430631 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@46 -- # waitforlisten 430631 /tmp/host.sock 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@829 -- # '[' -z 430631 ']' 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@834 -- # local max_retries=100 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:23:53.591 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@838 -- # xtrace_disable 00:23:53.591 12:09:40 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:53.850 [2024-07-25 12:09:40.874198] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
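The discovery test above uses two SPDK instances: the nvmf target started inside the namespace, and a second nvmf_tgt bound to /tmp/host.sock that plays the host. A sketch of the setup RPCs traced above, with rpc.py standing in for the script's rpc_cmd wrapper and binary paths abbreviated:
# Target side: TCP transport, a discovery listener on port 8009, and two null bdevs for the test.
rpc.py nvmf_create_transport -t tcp -o -u 8192
rpc.py nvmf_subsystem_add_listener nqn.2014-08.org.nvmexpress.discovery -t tcp -a 10.0.0.2 -s 8009
rpc.py bdev_null_create null0 1000 512   # name, size, block size exactly as in the trace
rpc.py bdev_null_create null1 1000 512
rpc.py bdev_wait_for_examine
# Host side: a second target whose RPC socket is /tmp/host.sock; discovery RPCs such as the
# bdev_nvme_start_discovery call that follows in the trace are issued against -s /tmp/host.sock.
nvmf_tgt -m 0x1 -r /tmp/host.sock &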
00:23:53.850 [2024-07-25 12:09:40.874239] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid430631 ] 00:23:53.850 EAL: No free 2048 kB hugepages reported on node 1 00:23:53.850 [2024-07-25 12:09:40.927287] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:53.850 [2024-07-25 12:09:41.000095] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@862 -- # return 0 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@48 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; kill $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@50 -- # rpc_cmd -s /tmp/host.sock log_set_flag bdev_nvme 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:53.850 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@51 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@72 -- # notify_id=0 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # get_subsystem_names 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@83 -- # [[ '' == '' ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # get_bdev_list 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@84 -- # [[ '' == '' ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@86 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # get_subsystem_names 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@87 -- # [[ '' == '' ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # get_bdev_list 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@88 -- # [[ '' == '' ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@90 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null0 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # get_subsystem_names 00:23:54.111 12:09:41 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.111 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.371 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@91 -- # [[ '' == '' ]] 00:23:54.371 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # get_bdev_list 00:23:54.371 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.371 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@92 -- # [[ '' == '' ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@96 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 [2024-07-25 12:09:41.431987] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # get_subsystem_names 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@97 -- # [[ '' == '' ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # 
get_bdev_list 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@98 -- # [[ '' == '' ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@99 -- # is_notification_count_eq 0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@103 -- # rpc_cmd nvmf_subsystem_add_host nqn.2016-06.io.spdk:cnode0 nqn.2021-12.io.spdk:test 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@105 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:54.372 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:54.632 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == \n\v\m\e\0 ]] 00:23:54.632 12:09:41 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:54.934 [2024-07-25 12:09:42.127575] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:54.934 [2024-07-25 12:09:42.127594] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:54.934 [2024-07-25 12:09:42.127608] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.193 
[2024-07-25 12:09:42.214879] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:23:55.193 [2024-07-25 12:09:42.402603] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.193 [2024-07-25 12:09:42.402622] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:55.452 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@106 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1" ]]' 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1"' ']]' 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 == \n\v\m\e\0\n\1 ]] 
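The host-side checks in this stretch of the log poll the second nvmf_tgt over /tmp/host.sock: start the discovery service against the target's discovery listener, then wait until a controller (nvme0) and its namespace bdev (nvme0n1) appear. A sketch of the equivalent rpc.py calls, arguments copied from the xtrace (the rpc.py path is an assumption; the jq filters are the ones the test itself uses):

# start discovery against the target's discovery subsystem on 10.0.0.2:8009
scripts/rpc.py -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test
# controllers attached via discovery (expected to show "nvme0" once cnode0 is exposed to this host)
scripts/rpc.py -s /tmp/host.sock bdev_nvme_get_controllers | jq -r '.[].name'
# bdevs created from the attached namespaces (expected to show "nvme0n1")
scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name'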
00:23:55.712 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@107 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT"' ']]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@108 -- # is_notification_count_eq 1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@111 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 null1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@113 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@114 -- # is_notification_count_eq 1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=1 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@118 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4421 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 [2024-07-25 12:09:42.948086] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:23:55.713 [2024-07-25 12:09:42.948783] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:55.713 [2024-07-25 12:09:42.948804] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@120 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 
-- # get_subsystem_names 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.713 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:55.973 12:09:42 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@121 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:55.973 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@122 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_PORT $NVMF_SECOND_PORT" ]]' 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval 
'[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:55.974 [2024-07-25 12:09:43.078200] bdev_nvme.c:6935:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new path for nvme0 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 == \4\4\2\0\ \4\4\2\1 ]] 00:23:55.974 12:09:43 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@918 -- # sleep 1 00:23:55.974 [2024-07-25 12:09:43.147013] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:55.974 [2024-07-25 12:09:43.147028] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:23:55.974 [2024-07-25 12:09:43.147033] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_PORT' '$NVMF_SECOND_PORT"' ']]' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4420 4421 == \4\4\2\0\ \4\4\2\1 ]] 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@123 -- # is_notification_count_eq 0 00:23:56.913 12:09:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:56.913 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@127 -- # rpc_cmd nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.174 [2024-07-25 12:09:44.199924] bdev_nvme.c:6993:discovery_aer_cb: *INFO*: Discovery[10.0.0.2:8009] got aer 00:23:57.174 [2024-07-25 12:09:44.199946] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.174 [2024-07-25 12:09:44.204623] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.174 [2024-07-25 12:09:44.204641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@129 -- # waitforcondition '[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:57.174 [2024-07-25 12:09:44.204650] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.174 [2024-07-25 12:09:44.204660] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-07-25 12:09:44.204668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.174 [2024-07-25 12:09:44.204674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-07-25 12:09:44.204682] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:23:57.174 [2024-07-25 12:09:44.204688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:23:57.174 [2024-07-25 12:09:44.204695] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "nvme0" ]]' 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '"nvme0"' ']]' 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.174 [2024-07-25 12:09:44.214634] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.174 [2024-07-25 12:09:44.224671] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.174 [2024-07-25 12:09:44.225248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.174 [2024-07-25 12:09:44.225265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.174 [2024-07-25 12:09:44.225273] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.174 [2024-07-25 12:09:44.225285] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.174 [2024-07-25 12:09:44.225302] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.174 [2024-07-25 12:09:44.225310] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: 
[nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.174 [2024-07-25 12:09:44.225318] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.174 [2024-07-25 12:09:44.225329] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.174 [2024-07-25 12:09:44.234727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.174 [2024-07-25 12:09:44.235279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.174 [2024-07-25 12:09:44.235292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.174 [2024-07-25 12:09:44.235299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.174 [2024-07-25 12:09:44.235315] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.174 [2024-07-25 12:09:44.235325] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.174 [2024-07-25 12:09:44.235332] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.174 [2024-07-25 12:09:44.235338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.174 [2024-07-25 12:09:44.235353] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.174 [2024-07-25 12:09:44.244776] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.174 [2024-07-25 12:09:44.245137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.174 [2024-07-25 12:09:44.245150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.174 [2024-07-25 12:09:44.245157] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.174 [2024-07-25 12:09:44.245168] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.174 [2024-07-25 12:09:44.245178] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.174 [2024-07-25 12:09:44.245187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.174 [2024-07-25 12:09:44.245194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.174 [2024-07-25 12:09:44.245203] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@130 -- # waitforcondition '[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.174 [2024-07-25 12:09:44.254828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "nvme0n1 nvme0n2" ]]' 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.174 [2024-07-25 12:09:44.255288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.174 [2024-07-25 12:09:44.255303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.174 [2024-07-25 12:09:44.255310] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.174 [2024-07-25 12:09:44.255320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.174 [2024-07-25 12:09:44.255330] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.174 [2024-07-25 12:09:44.255336] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.174 [2024-07-25 12:09:44.255342] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.174 [2024-07-25 12:09:44.255351] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '"nvme0n1' 'nvme0n2"' ']]' 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.174 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.175 [2024-07-25 12:09:44.264883] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.175 [2024-07-25 12:09:44.265309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.175 [2024-07-25 12:09:44.265323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.175 [2024-07-25 12:09:44.265330] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.175 [2024-07-25 12:09:44.265341] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.175 [2024-07-25 12:09:44.265350] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.175 [2024-07-25 12:09:44.265356] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.175 [2024-07-25 12:09:44.265367] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.175 [2024-07-25 12:09:44.265376] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:23:57.175 [2024-07-25 12:09:44.274935] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.175 [2024-07-25 12:09:44.275380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.175 [2024-07-25 12:09:44.275393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.175 [2024-07-25 12:09:44.275400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.175 [2024-07-25 12:09:44.275410] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.175 [2024-07-25 12:09:44.275420] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.175 [2024-07-25 12:09:44.275427] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.175 [2024-07-25 12:09:44.275433] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.175 [2024-07-25 12:09:44.275442] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.175 [2024-07-25 12:09:44.284986] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:23:57.175 [2024-07-25 12:09:44.285379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:57.175 [2024-07-25 12:09:44.285391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x831f30 with addr=10.0.0.2, port=4420 00:23:57.175 [2024-07-25 12:09:44.285398] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x831f30 is same with the state(5) to be set 00:23:57.175 [2024-07-25 12:09:44.285408] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x831f30 (9): Bad file descriptor 00:23:57.175 [2024-07-25 12:09:44.285417] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:23:57.175 [2024-07-25 12:09:44.285423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:23:57.175 [2024-07-25 12:09:44.285429] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:23:57.175 [2024-07-25 12:09:44.285438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
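The ERROR burst above is consistent with the step the test just took: discovery.sh@127 removed the 4420 listener, so the host's existing path keeps retrying that port and fails with errno 111 (ECONNREFUSED on Linux) until the refreshed discovery log page drops the 4420 path and leaves only 4421, as the next entries show. The removal itself is a single target-side RPC, copied from the xtrace (rpc.py path assumed as before):

# drop the original data listener; the 4421 listener added earlier keeps serving the subsystem
scripts/rpc.py nvmf_subsystem_remove_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420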
00:23:57.175 [2024-07-25 12:09:44.287563] bdev_nvme.c:6798:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 not found 00:23:57.175 [2024-07-25 12:09:44.287577] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@131 -- # waitforcondition '[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_paths nvme0)" == "$NVMF_SECOND_PORT" ]]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_paths' 'nvme0)"' == '"$NVMF_SECOND_PORT"' ']]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_paths nvme0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # jq -r '.[].ctrlrs[].trid.trsvcid' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers -n nvme0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # sort -n 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@63 -- # xargs 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ 4421 == \4\4\2\1 ]] 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@132 -- # is_notification_count_eq 0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery 
-- common/autotest_common.sh@915 -- # get_notification_count 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. | length' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=2 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@134 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_stop_discovery -b nvme 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@136 -- # waitforcondition '[[ "$(get_subsystem_names)" == "" ]]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_subsystem_names)" == "" ]]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_subsystem_names)"' == '""' ']]' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_subsystem_names 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_controllers 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # jq -r '.[].name' 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # sort 00:23:57.175 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@59 -- # xargs 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@137 -- # waitforcondition '[[ "$(get_bdev_list)" == "" ]]' 00:23:57.435 12:09:44 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=[[ "$(get_bdev_list)" == "" ]]' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval '[[' '"$(get_bdev_list)"' == '""' ']]' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_bdev_list 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # [[ '' == '' ]] 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@138 -- # is_notification_count_eq 2 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@79 -- # expected_count=2 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@80 -- # waitforcondition 'get_notification_count && ((notification_count == expected_count))' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@912 -- # local 'cond=get_notification_count && ((notification_count == expected_count))' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@913 -- # local max=10 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@914 -- # (( max-- )) 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # eval get_notification_count '&&' '((notification_count' == 'expected_count))' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # get_notification_count 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # rpc_cmd -s /tmp/host.sock notify_get_notifications -i 2 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # jq '. 
| length' 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@74 -- # notification_count=2 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@75 -- # notify_id=4 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@915 -- # (( notification_count == expected_count )) 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@916 -- # return 0 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@141 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:57.435 12:09:44 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.372 [2024-07-25 12:09:45.618263] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:23:58.372 [2024-07-25 12:09:45.618280] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:23:58.372 [2024-07-25 12:09:45.618293] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:23:58.632 [2024-07-25 12:09:45.707566] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 new subsystem nvme0 00:23:58.632 [2024-07-25 12:09:45.812572] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:23:58.632 [2024-07-25 12:09:45.812598] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4421 found again 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@143 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.632 request: 00:23:58.632 { 00:23:58.632 "name": "nvme", 00:23:58.632 "trtype": "tcp", 00:23:58.632 "traddr": "10.0.0.2", 00:23:58.632 "adrfam": "ipv4", 00:23:58.632 "trsvcid": "8009", 00:23:58.632 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:58.632 "wait_for_attach": true, 00:23:58.632 "method": "bdev_nvme_start_discovery", 00:23:58.632 "req_id": 1 00:23:58.632 } 00:23:58.632 Got JSON-RPC error response 00:23:58.632 response: 00:23:58.632 { 00:23:58.632 "code": -17, 00:23:58.632 "message": "File exists" 00:23:58.632 } 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # get_discovery_ctrlrs 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@145 -- # [[ nvme == \n\v\m\e ]] 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # get_bdev_list 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.632 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@146 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@149 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q 
nqn.2021-12.io.spdk:test -w 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test -w 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.892 request: 00:23:58.892 { 00:23:58.892 "name": "nvme_second", 00:23:58.892 "trtype": "tcp", 00:23:58.892 "traddr": "10.0.0.2", 00:23:58.892 "adrfam": "ipv4", 00:23:58.892 "trsvcid": "8009", 00:23:58.892 "hostnqn": "nqn.2021-12.io.spdk:test", 00:23:58.892 "wait_for_attach": true, 00:23:58.892 "method": "bdev_nvme_start_discovery", 00:23:58.892 "req_id": 1 00:23:58.892 } 00:23:58.892 Got JSON-RPC error response 00:23:58.892 response: 00:23:58.892 { 00:23:58.892 "code": -17, 00:23:58.892 "message": "File exists" 00:23:58.892 } 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # get_discovery_ctrlrs 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@151 -- # [[ nvme == \n\v\m\e ]] 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # get_bdev_list 00:23:58.892 12:09:45 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # jq -r '.[].name' 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # sort 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:58.892 12:09:45 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@55 -- # xargs 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@152 -- # [[ nvme0n1 nvme0n2 == \n\v\m\e\0\n\1\ \n\v\m\e\0\n\2 ]] 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@155 -- # NOT rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@648 -- # local es=0 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme_second -t tcp -a 10.0.0.2 -s 8010 -f ipv4 -q nqn.2021-12.io.spdk:test -T 3000 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:23:58.892 12:09:46 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:23:59.829 [2024-07-25 12:09:47.049607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:23:59.829 [2024-07-25 12:09:47.049633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa08250 with addr=10.0.0.2, port=8010 00:23:59.829 [2024-07-25 12:09:47.049646] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:23:59.829 [2024-07-25 12:09:47.049653] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:23:59.829 [2024-07-25 12:09:47.049659] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:01.206 [2024-07-25 12:09:48.052066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:24:01.206 [2024-07-25 12:09:48.052089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa08250 with addr=10.0.0.2, port=8010 00:24:01.206 [2024-07-25 12:09:48.052100] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:24:01.206 [2024-07-25 12:09:48.052106] nvme.c: 830:nvme_probe_internal: 
*ERROR*: NVMe ctrlr scan failed 00:24:01.206 [2024-07-25 12:09:48.052129] bdev_nvme.c:7073:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] could not start discovery connect 00:24:02.144 [2024-07-25 12:09:49.053960] bdev_nvme.c:7054:discovery_poller: *ERROR*: Discovery[10.0.0.2:8010] timed out while attaching discovery ctrlr 00:24:02.144 request: 00:24:02.144 { 00:24:02.144 "name": "nvme_second", 00:24:02.144 "trtype": "tcp", 00:24:02.144 "traddr": "10.0.0.2", 00:24:02.144 "adrfam": "ipv4", 00:24:02.144 "trsvcid": "8010", 00:24:02.144 "hostnqn": "nqn.2021-12.io.spdk:test", 00:24:02.144 "wait_for_attach": false, 00:24:02.144 "attach_timeout_ms": 3000, 00:24:02.144 "method": "bdev_nvme_start_discovery", 00:24:02.144 "req_id": 1 00:24:02.144 } 00:24:02.144 Got JSON-RPC error response 00:24:02.144 response: 00:24:02.144 { 00:24:02.144 "code": -110, 00:24:02.144 "message": "Connection timed out" 00:24:02.144 } 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@651 -- # es=1 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # get_discovery_ctrlrs 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_get_discovery_info 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # jq -r '.[].name' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # sort 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@67 -- # xargs 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@157 -- # [[ nvme == \n\v\m\e ]] 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@159 -- # trap - SIGINT SIGTERM EXIT 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@161 -- # kill 430631 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- host/discovery.sh@162 -- # nvmftestfini 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@117 -- # sync 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@120 -- # set +e 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:02.144 rmmod nvme_tcp 00:24:02.144 rmmod nvme_fabrics 00:24:02.144 rmmod nvme_keyring 00:24:02.144 12:09:49 
nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@124 -- # set -e 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@125 -- # return 0 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@489 -- # '[' -n 430389 ']' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@490 -- # killprocess 430389 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@948 -- # '[' -z 430389 ']' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@952 -- # kill -0 430389 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # uname 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 430389 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@966 -- # echo 'killing process with pid 430389' 00:24:02.144 killing process with pid 430389 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@967 -- # kill 430389 00:24:02.144 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@972 -- # wait 430389 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:02.403 12:09:49 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:04.312 00:24:04.312 real 0m16.944s 00:24:04.312 user 0m20.925s 00:24:04.312 sys 0m5.211s 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_discovery -- common/autotest_common.sh@10 -- # set +x 00:24:04.312 ************************************ 00:24:04.312 END TEST nvmf_host_discovery 00:24:04.312 ************************************ 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@27 -- # run_test nvmf_host_multipath_status /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:04.312 
12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:04.312 ************************************ 00:24:04.312 START TEST nvmf_host_multipath_status 00:24:04.312 ************************************ 00:24:04.312 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/multipath_status.sh --transport=tcp 00:24:04.571 * Looking for test storage... 00:24:04.571 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # uname -s 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.571 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:04.572 12:09:51 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@5 -- # export PATH 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@47 -- # : 0 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 
00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@12 -- # MALLOC_BDEV_SIZE=64 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@13 -- # MALLOC_BLOCK_SIZE=512 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@15 -- # rpc_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@16 -- # bpf_sh=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/bpftrace.sh 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@18 -- # bdevperf_rpc_sock=/var/tmp/bdevperf.sock 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@21 -- # NQN=nqn.2016-06.io.spdk:cnode1 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@31 -- # nvmftestinit 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@285 -- # xtrace_disable 00:24:04.572 12:09:51 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # pci_devs=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
nvmf/common.sh@293 -- # pci_drivers=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # net_devs=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # e810=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@296 -- # local -ga e810 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # x722=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@297 -- # local -ga x722 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # mlx=() 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@298 -- # local -ga mlx 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:09.890 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:09.890 
12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:09.890 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:09.890 Found net devices under 0000:86:00.0: cvl_0_0 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:09.890 12:09:56 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:09.890 Found net devices under 0000:86:00.1: cvl_0_1 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@414 -- # is_hw=yes 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:09.890 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set 
lo up 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:09.891 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:24:09.891 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.176 ms 00:24:09.891 00:24:09.891 --- 10.0.0.2 ping statistics --- 00:24:09.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.891 rtt min/avg/max/mdev = 0.176/0.176/0.176/0.000 ms 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:09.891 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:09.891 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.222 ms 00:24:09.891 00:24:09.891 --- 10.0.0.1 ping statistics --- 00:24:09.891 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:09.891 rtt min/avg/max/mdev = 0.222/0.222/0.222/0.000 ms 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@422 -- # return 0 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@33 -- # nvmfappstart -m 0x3 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@481 -- # nvmfpid=435486 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@482 -- # waitforlisten 435486 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x3 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 435486 ']' 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:09.891 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:09.891 12:09:56 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:09.891 [2024-07-25 12:09:56.840460] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:24:09.891 [2024-07-25 12:09:56.840511] [ DPDK EAL parameters: nvmf -c 0x3 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:09.891 EAL: No free 2048 kB hugepages reported on node 1 00:24:09.891 [2024-07-25 12:09:56.898950] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:24:09.891 [2024-07-25 12:09:56.973038] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:09.891 [2024-07-25 12:09:56.973100] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:09.891 [2024-07-25 12:09:56.973107] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:09.891 [2024-07-25 12:09:56.973113] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:09.891 [2024-07-25 12:09:56.973118] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:09.891 [2024-07-25 12:09:56.973163] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:09.891 [2024-07-25 12:09:56.973166] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@34 -- # nvmfapp_pid=435486 00:24:10.459 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@36 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192 00:24:10.718 [2024-07-25 12:09:57.829915] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:10.718 12:09:57 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@37 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py bdev_malloc_create 64 512 -b Malloc0 00:24:10.979 Malloc0 00:24:10.979 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@39 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_create_subsystem 
nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -r -m 2 00:24:10.979 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:24:11.238 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@41 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:24:11.498 [2024-07-25 12:09:58.520401] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@42 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 00:24:11.498 [2024-07-25 12:09:58.700918] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4421 *** 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@45 -- # bdevperf_pid=435817 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 0x4 -z -r /var/tmp/bdevperf.sock -q 128 -o 4096 -w verify -t 90 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@47 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@48 -- # waitforlisten 435817 /var/tmp/bdevperf.sock 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@829 -- # '[' -z 435817 ']' 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bdevperf.sock 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock...' 00:24:11.498 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bdevperf.sock... 
00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:11.498 12:09:58 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:12.435 12:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:12.436 12:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@862 -- # return 0 00:24:12.436 12:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@52 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_options -r -1 00:24:12.695 12:09:59 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@55 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -l -1 -o 10 00:24:12.955 Nvme0n1 00:24:13.214 12:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@56 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_attach_controller -b Nvme0 -t tcp -a 10.0.0.2 -s 4421 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -x multipath -l -1 -o 10 00:24:13.473 Nvme0n1 00:24:13.473 12:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@78 -- # sleep 2 00:24:13.473 12:10:00 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@76 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -t 120 -s /var/tmp/bdevperf.sock perform_tests 00:24:15.378 12:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@90 -- # set_ANA_state optimized optimized 00:24:15.378 12:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:15.637 12:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:15.897 12:10:02 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@91 -- # sleep 1 00:24:16.835 12:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@92 -- # check_status true false true true true true 00:24:16.835 12:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:16.835 12:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:16.835 12:10:03 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 
-- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:17.095 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.355 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.355 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:17.355 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.355 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:17.614 12:10:04 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:17.874 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:17.874 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@94 -- # set_ANA_state non_optimized optimized 00:24:17.874 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:18.133 12:10:05 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:18.392 12:10:05 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@95 -- # sleep 1 00:24:19.328 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@96 -- # check_status false true true true true true 00:24:19.328 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:19.328 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.328 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:19.588 12:10:06 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.847 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:19.847 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:19.847 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:19.847 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:20.107 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.107 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:20.107 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.107 12:10:07 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@100 -- # set_ANA_state non_optimized non_optimized 00:24:20.367 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:20.626 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:20.885 12:10:07 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@101 -- # sleep 1 00:24:21.959 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@102 -- # check_status true false true true true true 00:24:21.959 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:21.959 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.959 12:10:08 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:21.959 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:21.959 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:21.959 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:21.959 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:22.218 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:22.218 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:22.218 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.218 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.476 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:22.735 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.735 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:22.735 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:22.735 12:10:09 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:22.993 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:22.993 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@104 -- # set_ANA_state non_optimized inaccessible 00:24:22.993 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:23.251 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:23.252 12:10:10 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@105 -- # sleep 1 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@106 -- # check_status true false true true true false 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:24.628 12:10:11 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.628 12:10:11 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:24.887 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:24.887 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:24.887 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:24.887 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:25.146 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:25.405 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:25.405 12:10:12 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:25.405 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:25.405 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@108 -- # set_ANA_state inaccessible inaccessible 00:24:25.405 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:25.664 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:25.922 12:10:12 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@109 -- # sleep 1 00:24:26.859 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@110 -- # check_status false false true true false false 00:24:26.859 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:26.859 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:26.859 12:10:13 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.118 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:27.377 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.377 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:27.377 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.377 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:27.635 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:27.635 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:27.635 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:27.635 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.894 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.894 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:27.894 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:27.894 12:10:14 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:27.894 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:27.894 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@112 -- # set_ANA_state inaccessible optimized 00:24:27.894 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n inaccessible 00:24:28.153 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:28.411 12:10:15 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@113 -- # sleep 1 00:24:29.349 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@114 -- # check_status false true true true false true 00:24:29.349 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:29.349 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.349 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:29.607 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:29.608 12:10:16 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.608 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:29.867 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:29.867 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:29.867 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:29.867 12:10:16 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible false 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:30.126 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:30.385 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:30.385 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@116 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_set_multipath_policy -b Nvme0n1 -p active_active 00:24:30.644 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@119 -- # 
set_ANA_state optimized optimized 00:24:30.644 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n optimized 00:24:30.902 12:10:17 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:30.902 12:10:18 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@120 -- # sleep 1 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@121 -- # check_status true true true true true true 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.281 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:32.538 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.539 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:32.797 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:32.797 12:10:19 
nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:32.797 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:32.797 12:10:19 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:33.056 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.056 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:33.056 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:33.056 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:33.316 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:33.316 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@123 -- # set_ANA_state non_optimized optimized 00:24:33.316 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:33.316 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n optimized 00:24:33.576 12:10:20 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@124 -- # sleep 1 00:24:34.515 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@125 -- # check_status false true true true true true 00:24:34.515 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current false 00:24:34.515 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.515 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:34.774 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:34.774 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:34.774 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:34.774 12:10:21 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.034 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:35.294 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.294 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:35.294 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.294 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:35.554 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.554 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:35.554 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:35.554 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:35.814 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:35.814 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@129 -- # set_ANA_state non_optimized non_optimized 00:24:35.814 12:10:22 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:35.814 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized 00:24:36.073 12:10:23 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@130 -- # sleep 1 
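All of the current/connected/accessible checks in this trace follow one pattern: query the bdevperf application for its I/O paths over /var/tmp/bdevperf.sock and filter the JSON by listener port. Below is a minimal sketch of that check, assuming the same RPC call and jq filter shown in the log; the actual port_status/check_status helpers in host/multipath_status.sh may be structured differently. It is paired with the set_ANA_state step that drives each round of checks.
# Minimal sketch of the per-port status check, reusing the RPC call and jq filter from the trace.
# Usage: port_status <trsvcid> <field> <expected>, e.g. port_status 4421 accessible false
port_status() {
    local port=$1 field=$2 expected=$3 actual
    actual=$(rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths |
        jq -r ".poll_groups[].io_paths[] | select(.transport.trsvcid==\"$port\").$field")
    [[ "$actual" == "$expected" ]]
}
# Each round flips the ANA state of one or both listeners, waits, then re-checks, e.g.:
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized
rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n non_optimized
sleep 1
port_status 4420 current true && port_status 4421 current true    # with active_active policy both paths stay current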
00:24:37.010 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@131 -- # check_status true true true true true true 00:24:37.010 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:37.010 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.010 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:37.270 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.270 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current true 00:24:37.270 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.270 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:37.529 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.788 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:37.788 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:37.788 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:37.788 12:10:24 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:38.047 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.047 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
host/multipath_status.sh@73 -- # port_status 4421 accessible true 00:24:38.048 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:38.048 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:38.307 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:38.307 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@133 -- # set_ANA_state non_optimized inaccessible 00:24:38.307 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@59 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 -n non_optimized 00:24:38.307 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@60 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_subsystem_listener_set_ana_state nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4421 -n inaccessible 00:24:38.566 12:10:25 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@134 -- # sleep 1 00:24:39.506 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@135 -- # check_status true false true true true false 00:24:39.506 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@68 -- # port_status 4420 current true 00:24:39.506 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.506 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").current' 00:24:39.765 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:39.765 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@69 -- # port_status 4421 current false 00:24:39.765 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:39.765 12:10:26 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").current' 00:24:40.026 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.026 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@70 -- # port_status 4420 connected true 00:24:40.026 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.026 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").connected' 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == 
\t\r\u\e ]] 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@71 -- # port_status 4421 connected true 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").connected' 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@72 -- # port_status 4420 accessible true 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.286 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4420").accessible' 00:24:40.545 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ true == \t\r\u\e ]] 00:24:40.545 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@73 -- # port_status 4421 accessible false 00:24:40.545 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # jq -r '.poll_groups[].io_paths[] | select (.transport.trsvcid=="4421").accessible' 00:24:40.545 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bdevperf.sock bdev_nvme_get_io_paths 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@64 -- # [[ false == \f\a\l\s\e ]] 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@137 -- # killprocess 435817 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 435817 ']' 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 435817 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 435817 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # process_name=reactor_2 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_2 = sudo ']' 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 435817' 00:24:40.805 killing process with pid 435817 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 435817 00:24:40.805 12:10:27 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 435817 00:24:40.805 Connection closed with partial response: 00:24:40.805 00:24:40.805 00:24:41.086 
12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@139 -- # wait 435817 00:24:41.086 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@141 -- # cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.086 [2024-07-25 12:09:58.763265] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:24:41.086 [2024-07-25 12:09:58.763318] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x4 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid435817 ] 00:24:41.086 EAL: No free 2048 kB hugepages reported on node 1 00:24:41.086 [2024-07-25 12:09:58.814840] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:41.087 [2024-07-25 12:09:58.888447] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:24:41.087 Running I/O for 90 seconds... 00:24:41.087 [2024-07-25 12:10:12.757051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757122] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757194] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.087 
[2024-07-25 12:10:12.757237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757260] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757367] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:33 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757528] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757547] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757798] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757842] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757851] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.087 [2024-07-25 12:10:12.757955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.757970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.757977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.758306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.758320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.758334] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.758341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.758357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.758364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.758377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:41.087 [2024-07-25 12:10:12.758384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.087 [2024-07-25 12:10:12.758397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.087 [2024-07-25 12:10:12.758403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758423] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758522] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758534] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758541] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 
lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758614] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758653] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.088 [2024-07-25 12:10:12.758757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758955] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758979] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.758986] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.758999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759041] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:004f p:0 m:0 dnr:0 
00:24:41.088 [2024-07-25 12:10:12.759171] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759239] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.088 [2024-07-25 12:10:12.759260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.088 [2024-07-25 12:10:12.759273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759283] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:76 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759440] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759500] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759507] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759519] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759567] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759581] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759589] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759622] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.759989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.759996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.760016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.760037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.089 [2024-07-25 12:10:12.760061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.089 [2024-07-25 12:10:12.760081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760165] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760172] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.089 [2024-07-25 12:10:12.760251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.089 [2024-07-25 12:10:12.760263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:70 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760351] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760449] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760489] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760572] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.760581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:000a p:0 m:0 dnr:0 
00:24:41.090 [2024-07-25 12:10:12.760929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.760986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.760993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761098] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:26 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761130] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.090 [2024-07-25 12:10:12.761137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.761155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.761174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761187] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.761194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.090 [2024-07-25 12:10:12.761213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.090 [2024-07-25 12:10:12.761225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761309] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761344] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761420] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761439] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761447] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761498] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761517] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761536] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.761602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 
nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.761758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.091 [2024-07-25 12:10:12.761765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.091 [2024-07-25 12:10:12.762145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.091 [2024-07-25 12:10:12.762155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762269] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762276] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762296] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762315] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762329] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762411] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762418] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762432] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.092 [2024-07-25 12:10:12.762458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 
00:24:41.092 [2024-07-25 12:10:12.762687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.762752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.762759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773632] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773750] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773776] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773788] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.092 [2024-07-25 12:10:12.773807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.092 [2024-07-25 12:10:12.773814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.773826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.773833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.773845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.773852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774058] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774106] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774113] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774134] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774154] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774175] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774245] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774252] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774265] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774272] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774285] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.093 [2024-07-25 12:10:12.774310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 
lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774553] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.093 [2024-07-25 12:10:12.774659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774691] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.093 [2024-07-25 12:10:12.774717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.093 [2024-07-25 12:10:12.774729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774749] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774756] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774768] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774852] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:24:41.094 [2024-07-25 12:10:12.774885] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774904] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774931] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774943] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.774970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.774982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.774989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775008] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775083] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775218] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775244] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775263] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.094 [2024-07-25 12:10:12.775282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775340] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775352] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775373] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775380] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775438] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:97 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:41.094 [2024-07-25 12:10:12.775457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.094 [2024-07-25 12:10:12.775473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.094 [2024-07-25 12:10:12.775480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775570] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775610] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775638] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775651] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 
nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775658] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775728] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775735] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775795] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775849] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775857] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775896] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.775915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775928] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775954] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775967] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.775986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.775992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776005] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.776012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.776031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:003b p:0 m:0 dnr:0 
00:24:41.095 [2024-07-25 12:10:12.776046] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.776054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.095 [2024-07-25 12:10:12.776074] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.776094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.776117] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776129] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.776136] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.776156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.095 [2024-07-25 12:10:12.776168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.095 [2024-07-25 12:10:12.776176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.776188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.096 [2024-07-25 12:10:12.776195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.776208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.096 [2024-07-25 12:10:12.776214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.096 [2024-07-25 12:10:12.777088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:39 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777123] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777194] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777213] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777225] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777290] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777309] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777328] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.777398] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.777405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.096 [2024-07-25 12:10:12.778230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778251] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.096 [2024-07-25 12:10:12.778395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.096 [2024-07-25 12:10:12.778402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778459] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778682] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778702] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778709] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778721] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778747] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778759] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778778] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778837] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778893] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.778918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 
12:10:12.778949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778974] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.778987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.778993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.779007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.779013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.779025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.779034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.785786] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.785797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786033] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.097 [2024-07-25 12:10:12.786049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786091] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 
sqhd:0000 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786170] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.097 [2024-07-25 12:10:12.786203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.097 [2024-07-25 12:10:12.786211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786223] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786242] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786283] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786329] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786341] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786367] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786461] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786468] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786480] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786501] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786507] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786654] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786699] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786711] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786718] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786732] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786817] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.098 [2024-07-25 12:10:12.786844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786857] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786863] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786883] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:90 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786901] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.098 [2024-07-25 12:10:12.786932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.098 [2024-07-25 12:10:12.786939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.786951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.786958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.786970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.786981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.786993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787000] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787018] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787065] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787099] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787107] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787148] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787201] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787208] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787243] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787250] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787270] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003b p:0 
m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787346] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.099 [2024-07-25 12:10:12.787481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787520] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787674] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.099 [2024-07-25 12:10:12.787693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.099 [2024-07-25 12:10:12.787707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787802] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.100 [2024-07-25 12:10:12.787867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787905] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787924] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787936] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.787985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.787997] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788035] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788059] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61856 
len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788066] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788078] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788106] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.788981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.788993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789004] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789081] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789100] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789121] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789147] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789159] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789198] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.100 [2024-07-25 12:10:12.789242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 12:10:12.789256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.100 [2024-07-25 12:10:12.789263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.100 [2024-07-25 
12:10:12.789275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.789282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.789294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.789301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.789313] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.789319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.789331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.789338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.789351] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.789358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790036] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790092] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790111] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790203] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.790228] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790240] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790259] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790266] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790303] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790322] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790557] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.790595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.790602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.791229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.791241] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.791254] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.791263] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.791277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.101 [2024-07-25 12:10:12.791285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.791300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.791307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.101 [2024-07-25 12:10:12.791320] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.101 [2024-07-25 12:10:12.791327] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791387] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791406] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791419] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791444] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA 
BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791532] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791551] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791697] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791706] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791726] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE 
sqid:1 cid:115 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791947] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.791967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.791982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.791989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792070] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792091] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792213] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792220] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792232] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792240] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003b p:0 
m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792459] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792466] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.102 [2024-07-25 12:10:12.792485] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.102 [2024-07-25 12:10:12.792497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.102 [2024-07-25 12:10:12.792504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792584] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792621] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.103 [2024-07-25 12:10:12.792641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC 
ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.792659] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792671] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.792678] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792690] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.792697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.792716] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.792729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.792736] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793198] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793210] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793238] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793271] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793278] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793299] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793337] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793356] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793368] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793426] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.103 [2024-07-25 12:10:12.793433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793636] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793659] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793666] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793704] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793723] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 
lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.103 [2024-07-25 12:10:12.793804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.103 [2024-07-25 12:10:12.793816] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793823] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793836] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793842] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793855] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793898] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793945] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793976] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.793983] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.793996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794034] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794047] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794060] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794098] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794117] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794136] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 
00:24:41.104 [2024-07-25 12:10:12.794628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:88 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.794754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794792] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794804] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794811] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:68 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794849] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794868] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.794900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.104 [2024-07-25 12:10:12.794909] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795097] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795104] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795126] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795145] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.104 [2024-07-25 12:10:12.795807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.104 [2024-07-25 12:10:12.795817] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795831] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795839] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795852] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795871] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795892] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795899] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795941] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795960] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.795981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.795993] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.105 [2024-07-25 12:10:12.796002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796049] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796062] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796151] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796314] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 
nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796385] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796426] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796438] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796445] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796457] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796464] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.105 [2024-07-25 12:10:12.796504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796639] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796677] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796684] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796696] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796715] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.105 [2024-07-25 12:10:12.796722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.105 [2024-07-25 12:10:12.796734] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.106 [2024-07-25 12:10:12.796742] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.106 [2024-07-25 12:10:12.796755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.106 [2024-07-25 12:10:12.796761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.106 [2024-07-25 12:10:12.796774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.106 [2024-07-25 12:10:12.796781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.106 [2024-07-25 12:10:12.796793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.106 [2024-07-25 12:10:12.796800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.106 [2024-07-25 12:10:12.796812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.106 [2024-07-25 12:10:12.796819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 
00:24:41.106 [2024-07-25 12:10:12.796832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:24:41.106 [2024-07-25 12:10:12.796838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0032 p:0 m:0 dnr:0
00:24:41.106 [2024-07-25 12:10:12.796907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000
00:24:41.106 [2024-07-25 12:10:12.796914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0036 p:0 m:0 dnr:0
[... the remaining nvme_io_qpair_print_command / spdk_nvme_print_completion NOTICE pairs of this burst are elided: READ (SGL TRANSPORT DATA BLOCK TRANSPORT 0x0) and WRITE (SGL DATA BLOCK OFFSET 0x0 len:0x1000) commands on sqid:1 with lba values between 61048 and 62064, each completing with ASYMMETRIC ACCESS INACCESSIBLE (03/02) on qid:1, cdw0:0 p:0 m:0 dnr:0, sqhd advancing from 0032 through 007f and wrapping around to 0062; wall-clock timestamps 2024-07-25 12:10:12.796832 through 12:10:12.805668, console elapsed time 00:24:41.106 through 00:24:41.110 ...]
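The burst of NOTICE records above is easier to reason about in aggregate than one record at a time. As a rough illustration only (these commands are not part of the autotest run, and console.log is a hypothetical name for a saved copy of this console output), standard grep/awk can tally the pattern:

    # Count how many completions in the saved log carry the ANA INACCESSIBLE status (03/02).
    grep -o 'ASYMMETRIC ACCESS INACCESSIBLE (03/02)' console.log | wc -l

    # Tally the printed commands by opcode (READ vs WRITE) on submission queue 1.
    grep -oE 'NOTICE\*: (READ|WRITE) sqid:1' console.log | awk '{print $2}' | sort | uniq -c

The (03/02) that SPDK prints with each completion is NVMe status code type 03h (path related), status code 02h (asymmetric access inaccessible), which is why every READ and WRITE queued on qid:1 in this stretch of the log is rejected rather than serviced.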
00:24:41.110 [2024-07-25 12:10:12.806310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-07-25 12:10:12.806321] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.110 [2024-07-25 12:10:12.806335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-07-25 12:10:12.806342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.110 [2024-07-25 12:10:12.806358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-07-25 12:10:12.806365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.110 [2024-07-25 12:10:12.806377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-07-25 12:10:12.806384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.110 [2024-07-25 12:10:12.806396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.110 [2024-07-25 12:10:12.806403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.110 [2024-07-25 12:10:12.806415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806472] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:20 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806536] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806549] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806555] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:109 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806587] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:27 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806615] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806809] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806836] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806849] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806868] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806888] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806913] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806925] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806932] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.806973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.806987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.806994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807025] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:41.111 [2024-07-25 12:10:12.807034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807058] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.111 [2024-07-25 12:10:12.807135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807149] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807156] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 
lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807384] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807397] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807718] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.111 [2024-07-25 12:10:12.807739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:9 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.111 [2024-07-25 12:10:12.807746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807758] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:113 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807818] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807825] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807845] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807859] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807866] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807879] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.807886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.807907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.807926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.807946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807958] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.807965] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.807984] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.807996] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:001d p:0 m:0 dnr:0 
00:24:41.112 [2024-07-25 12:10:12.808196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808203] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808216] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808235] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808382] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808394] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808401] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808421] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808452] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.112 [2024-07-25 12:10:12.808496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808509] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808576] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808626] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.112 [2024-07-25 12:10:12.808633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.112 [2024-07-25 12:10:12.808645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.808652] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:28 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808691] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808899] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.113 [2024-07-25 12:10:12.808969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.808981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.808988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809001] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809007] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809020] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809051] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809082] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809122] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809150] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809163] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 
nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.113 [2024-07-25 12:10:12.809169] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809182] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809188] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809209] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809497] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809516] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809525] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809544] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809576] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809637] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809643] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809656] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809662] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809675] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809682] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809694] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 
00:24:41.113 [2024-07-25 12:10:12.809712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809719] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809731] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.113 [2024-07-25 12:10:12.809892] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.113 [2024-07-25 12:10:12.809905] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.809912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810345] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810353] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810365] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810387] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810395] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810408] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810427] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810435] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:41 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810505] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810512] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810562] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810830] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810838] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.810859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810872] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810879] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.810989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.810997] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811009] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.114 [2024-07-25 12:10:12.811016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.114 [2024-07-25 12:10:12.811036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811083] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811095] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811102] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811261] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811268] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811280] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811301] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.114 [2024-07-25 12:10:12.811310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.114 [2024-07-25 12:10:12.811323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 
lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811382] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811389] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811404] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811411] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811468] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.811475] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811593] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811607] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811634] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811647] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811653] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811666] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811673] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811707] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811726] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811733] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811773] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811785] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811794] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811807] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811952] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 
00:24:41.115 [2024-07-25 12:10:12.811972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.811991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.115 [2024-07-25 12:10:12.811998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812010] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812060] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812074] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812081] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812139] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812153] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812218] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812231] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812256] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.115 [2024-07-25 12:10:12.812417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.115 [2024-07-25 12:10:12.812424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812436] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812455] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812481] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:59 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812500] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:90 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812533] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812540] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812628] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812635] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:53 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:24:41.116 [2024-07-25 12:10:12.812674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812687] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812746] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.812752] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812931] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:89 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812939] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:101 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812958] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.812989] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:96 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.812996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 
lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813027] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:30 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813034] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813059] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813071] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:22 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813078] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813090] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:108 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813097] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813109] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813128] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:50 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813196] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813208] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.116 [2024-07-25 12:10:12.813215] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813358] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813366] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813386] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813399] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813405] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813418] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813437] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.116 [2024-07-25 12:10:12.813443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.116 [2024-07-25 12:10:12.813456] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813463] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813475] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813482] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:34 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004e p:0 m:0 dnr:0 
00:24:41.117 [2024-07-25 12:10:12.813535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813580] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813630] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813675] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:12 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813929] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813949] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.813968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.813975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814095] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814107] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814114] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814126] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814133] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814145] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814190] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814204] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:86 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814338] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:55 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814364] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814377] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814415] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:61928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:24:41.117 [2024-07-25 12:10:12.814422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814434] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.117 [2024-07-25 12:10:12.814441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.117 [2024-07-25 12:10:12.814460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.117 [2024-07-25 12:10:12.814479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814492] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:45 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.117 [2024-07-25 12:10:12.814498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.117 [2024-07-25 12:10:12.814518] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.117 [2024-07-25 12:10:12.814530] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814577] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 
lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814752] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814781] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:33 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814813] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:63 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.814877] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814917] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814929] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814936] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814968] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814975] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.814987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.814994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.815013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815026] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.815032] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815050] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815057] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815236] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:0009 p:0 m:0 dnr:0 
00:24:41.118 [2024-07-25 12:10:12.815255] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815297] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815441] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815473] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:42 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815512] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:94 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815531] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.118 [2024-07-25 12:10:12.815557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.815576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.815597] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.118 [2024-07-25 12:10:12.815616] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.118 [2024-07-25 12:10:12.815629] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815636] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815655] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815674] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815686] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815693] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815712] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815744] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815750] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815763] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815782] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:115 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:87 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815808] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815841] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815848] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815860] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.815867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.815880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.815886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:61504 len:8 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816099] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:53 cdw0:0 sqhd:0027 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816112] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:61512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816119] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:92 cdw0:0 sqhd:0028 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816132] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:36 nsid:1 lba:61520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:36 cdw0:0 sqhd:0029 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816150] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:61528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:126 cdw0:0 sqhd:002a p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:79 nsid:1 lba:61536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:79 cdw0:0 sqhd:002b p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:16 nsid:1 lba:61544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816195] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:16 cdw0:0 sqhd:002c p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816208] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:61552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:002d p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:61176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:90 cdw0:0 sqhd:002e p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:61184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816255] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:59 cdw0:0 sqhd:002f p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816267] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:61192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816274] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:8 cdw0:0 sqhd:0030 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816286] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 
nsid:1 lba:61200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816293] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:104 cdw0:0 sqhd:0031 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:7 nsid:1 lba:61208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:7 cdw0:0 sqhd:0032 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:61216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:37 cdw0:0 sqhd:0033 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816343] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:61224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816350] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:118 cdw0:0 sqhd:0034 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:61232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.119 [2024-07-25 12:10:12.816369] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:70 cdw0:0 sqhd:0035 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0036 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816481] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:61568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816491] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:111 cdw0:0 sqhd:0037 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:95 nsid:1 lba:61576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:95 cdw0:0 sqhd:0038 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:75 nsid:1 lba:61584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816542] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:75 cdw0:0 sqhd:0039 p:0 m:0 dnr:0 00:24:41.119 [2024-07-25 12:10:12.816555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:61592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.119 [2024-07-25 12:10:12.816562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:56 cdw0:0 sqhd:003a p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816575] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:61600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.816582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:108 cdw0:0 sqhd:003b p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:61608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.816605] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:22 cdw0:0 sqhd:003c p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:61616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.816625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:52 cdw0:0 sqhd:003d p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816638] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:61240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816645] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:30 cdw0:0 sqhd:003e p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:78 nsid:1 lba:61248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816665] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:78 cdw0:0 sqhd:003f p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816678] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:61256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:96 cdw0:0 sqhd:0040 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816698] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:48 nsid:1 lba:61264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816705] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:48 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816719] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:61272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:101 cdw0:0 sqhd:0042 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:61280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:89 cdw0:0 sqhd:0043 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816759] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:61288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:40 cdw0:0 sqhd:0044 p:0 m:0 dnr:0 
00:24:41.120 [2024-07-25 12:10:12.816779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:61296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.120 [2024-07-25 12:10:12.816786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:114 cdw0:0 sqhd:0045 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.816800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:61624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.816807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:67 cdw0:0 sqhd:0046 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817067] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:61632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817076] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:121 cdw0:0 sqhd:0047 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:61640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:125 cdw0:0 sqhd:0048 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817114] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:61648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817121] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:65 cdw0:0 sqhd:0049 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817135] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:85 nsid:1 lba:61656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817142] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:85 cdw0:0 sqhd:004a p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:61664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817164] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:57 cdw0:0 sqhd:004b p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817178] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:61672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817185] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:004c p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:61680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:84 cdw0:0 sqhd:004d p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817220] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:61688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817227] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE 
(03/02) qid:1 cid:34 cdw0:0 sqhd:004e p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:61696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817248] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:004f p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:61704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:60 cdw0:0 sqhd:0050 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:102 nsid:1 lba:61712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:102 cdw0:0 sqhd:0051 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817305] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:61720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817312] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:124 cdw0:0 sqhd:0052 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:81 nsid:1 lba:61728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817333] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:81 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:61736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:105 cdw0:0 sqhd:0054 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:61744 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:103 cdw0:0 sqhd:0055 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817391] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:106 nsid:1 lba:61752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:106 cdw0:0 sqhd:0056 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817412] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:61760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817419] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:5 cdw0:0 sqhd:0057 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:61768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817478] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:20 cdw0:0 sqhd:0058 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:61776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:35 cdw0:0 sqhd:0059 p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817516] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:61784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:11 cdw0:0 sqhd:005a p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:61792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:82 cdw0:0 sqhd:005b p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817561] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:61800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:12 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817583] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:61808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817590] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:0 cdw0:0 sqhd:005d p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:62 nsid:1 lba:61816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817612] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:62 cdw0:0 sqhd:005e p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817627] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:24 nsid:1 lba:61824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:24 cdw0:0 sqhd:005f p:0 m:0 dnr:0 00:24:41.120 [2024-07-25 12:10:12.817649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:61832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.120 [2024-07-25 12:10:12.817657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:26 cdw0:0 sqhd:0060 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817673] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:61840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:41 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:61848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 
[2024-07-25 12:10:12.817702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0062 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817717] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:61856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:27 cdw0:0 sqhd:0063 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:61864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:109 cdw0:0 sqhd:0064 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817762] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:61872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817769] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:119 cdw0:0 sqhd:0065 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817784] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:31 nsid:1 lba:61880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:31 cdw0:0 sqhd:0066 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817806] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:61888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817814] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:1 cdw0:0 sqhd:0067 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817829] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:18 nsid:1 lba:61896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:18 cdw0:0 sqhd:0068 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:91 nsid:1 lba:61904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:91 cdw0:0 sqhd:0069 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:61912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:29 cdw0:0 sqhd:006a p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817896] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:51 nsid:1 lba:61920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:006b p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817918] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:61928 len:8 
SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:45 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817941] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:61936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.817948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:2 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817963] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:61304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.817970] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:88 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.817986] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:61312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.817993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:13 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818008] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:61320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818031] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:93 nsid:1 lba:61328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818038] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818057] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:61336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818064] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:23 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:61344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:54 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:61352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:55 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:61360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:86 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818146] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:44 nsid:1 lba:61944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818153] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:61952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:15 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:61960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818257] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:4 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818275] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:61968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:112 cdw0:0 sqhd:0079 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:61976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818306] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:74 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:61984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:122 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818348] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:61992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:6 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818372] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:62000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.121 [2024-07-25 12:10:12.818379] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:68 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:61368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:72 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:61376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818428] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:63 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818445] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:61384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.121 [2024-07-25 12:10:12.818452] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:107 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:41.121 [2024-07-25 12:10:12.818469] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:61392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.818476] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:61400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.818501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0002 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:61408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.818524] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:33 cdw0:0 sqhd:0003 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818542] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:61416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.818549] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:110 cdw0:0 sqhd:0004 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818566] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:61424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.818575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:71 cdw0:0 sqhd:0005 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:62008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:83 cdw0:0 sqhd:0006 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:62016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818670] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:97 cdw0:0 sqhd:0007 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818689] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:73 nsid:1 lba:62024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:73 cdw0:0 sqhd:0008 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:62032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818721] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:61 cdw0:0 sqhd:0009 
p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818739] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:62040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818746] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:39 cdw0:0 sqhd:000a p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818764] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:62048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:46 cdw0:0 sqhd:000b p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818789] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:62056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:17 cdw0:0 sqhd:000c p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:76 nsid:1 lba:62064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:76 cdw0:0 sqhd:000d p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818839] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:61432 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818846] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:19 cdw0:0 sqhd:000e p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:69 nsid:1 lba:61440 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818871] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:69 cdw0:0 sqhd:000f p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:61448 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818897] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:10 cdw0:0 sqhd:0010 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818916] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:116 nsid:1 lba:61456 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:116 cdw0:0 sqhd:0011 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818944] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:61464 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:87 cdw0:0 sqhd:0012 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:61472 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.818977] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS 
INACCESSIBLE (03/02) qid:1 cid:115 cdw0:0 sqhd:0013 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.818995] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:61480 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.819002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:14 cdw0:0 sqhd:0014 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:49 nsid:1 lba:61488 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.122 [2024-07-25 12:10:12.819029] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:49 cdw0:0 sqhd:0015 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819199] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:21 nsid:1 lba:61048 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819207] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:21 cdw0:0 sqhd:0016 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:61056 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819233] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:80 cdw0:0 sqhd:0017 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819251] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:61064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:113 cdw0:0 sqhd:0018 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:61072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819284] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:38 cdw0:0 sqhd:0019 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:61080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:9 cdw0:0 sqhd:001a p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819328] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:61088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:117 cdw0:0 sqhd:001b p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:61096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819361] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:77 cdw0:0 sqhd:001c p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819380] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:61104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819387] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:001d p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:61112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:28 cdw0:0 sqhd:001e p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819433] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:61120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:32 cdw0:0 sqhd:001f p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:61128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:0020 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819486] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:61136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:94 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819514] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:61144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:42 cdw0:0 sqhd:0022 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:61152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:43 cdw0:0 sqhd:0023 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819567] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:61160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:64 cdw0:0 sqhd:0024 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819592] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:58 nsid:1 lba:61168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.122 [2024-07-25 12:10:12.819601] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:58 cdw0:0 sqhd:0025 p:0 m:0 dnr:0 00:24:41.122 [2024-07-25 12:10:12.819620] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:61496 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:12.819626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:50 cdw0:0 sqhd:0026 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.690970] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:118576 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:24:41.123 [2024-07-25 12:10:25.691012] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:123 cdw0:0 sqhd:006d p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.691066] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:118608 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.123 [2024-07-25 12:10:25.691075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:44 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.691088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:120 nsid:1 lba:118672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.691096] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:120 cdw0:0 sqhd:006f p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.691113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:118584 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.123 [2024-07-25 12:10:25.691120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:51 cdw0:0 sqhd:0070 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.691133] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:118616 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.123 [2024-07-25 12:10:25.691140] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:98 cdw0:0 sqhd:0071 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.692882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:118640 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:24:41.123 [2024-07-25 12:10:25.692903] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:99 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.695337] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:47 nsid:1 lba:118688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.695358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:47 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.695375] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:100 nsid:1 lba:118704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.695383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:100 cdw0:0 sqhd:0074 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.695396] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:118720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.695403] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:66 cdw0:0 sqhd:0075 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.695416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:118736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.695422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:25 cdw0:0 sqhd:0076 p:0 m:0 dnr:0 00:24:41.123 [2024-07-25 12:10:25.695435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:93 nsid:1 lba:118752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:24:41.123 [2024-07-25 12:10:25.695442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ASYMMETRIC ACCESS INACCESSIBLE (03/02) qid:1 cid:93 cdw0:0 sqhd:0077 p:0 m:0 dnr:0 00:24:41.123 Received shutdown signal, test time was about 27.193727 seconds 00:24:41.123 00:24:41.123 Latency(us) 00:24:41.123 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:24:41.123 Job: Nvme0n1 (Core Mask 0x4, workload: verify, depth: 128, IO size: 4096) 00:24:41.123 Verification LBA range: start 0x0 length 0x4000 00:24:41.123 Nvme0n1 : 27.19 10530.88 41.14 0.00 0.00 12133.12 559.19 3092843.30 00:24:41.123 =================================================================================================================== 00:24:41.123 Total : 10530.88 41.14 0.00 0.00 12133.12 559.19 3092843.30 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@143 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@145 -- # trap - SIGINT SIGTERM EXIT 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@147 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/try.txt 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- host/multipath_status.sh@148 -- # nvmftestfini 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@488 -- # nvmfcleanup 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@117 -- # sync 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@120 -- # set +e 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@121 -- # for i in {1..20} 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:24:41.123 rmmod nvme_tcp 00:24:41.123 rmmod nvme_fabrics 00:24:41.123 rmmod nvme_keyring 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@124 -- # set -e 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@125 -- # return 0 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@489 -- # '[' -n 435486 ']' 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@490 -- # killprocess 435486 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@948 -- # '[' -z 435486 ']' 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@952 -- # kill -0 435486 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # uname 00:24:41.123 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 435486 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- 
common/autotest_common.sh@954 -- # process_name=reactor_0 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@966 -- # echo 'killing process with pid 435486' 00:24:41.383 killing process with pid 435486 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@967 -- # kill 435486 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@972 -- # wait 435486 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@278 -- # remove_spdk_ns 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:41.383 12:10:28 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.978 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:24:43.978 00:24:43.978 real 0m39.103s 00:24:43.978 user 1m46.237s 00:24:43.978 sys 0m10.538s 00:24:43.978 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@1124 -- # xtrace_disable 00:24:43.978 12:10:30 nvmf_tcp.nvmf_host.nvmf_host_multipath_status -- common/autotest_common.sh@10 -- # set +x 00:24:43.978 ************************************ 00:24:43.979 END TEST nvmf_host_multipath_status 00:24:43.979 ************************************ 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@28 -- # run_test nvmf_discovery_remove_ifc /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:24:43.979 ************************************ 00:24:43.979 START TEST nvmf_discovery_remove_ifc 00:24:43.979 ************************************ 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/discovery_remove_ifc.sh --transport=tcp 00:24:43.979 * Looking for test storage... 
00:24:43.979 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # uname -s 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@5 -- # export PATH 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@47 -- # : 0 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@33 -- # '[' -n '' ']' 
00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@51 -- # have_pci_nics=0 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@14 -- # '[' tcp == rdma ']' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@19 -- # discovery_port=8009 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@20 -- # discovery_nqn=nqn.2014-08.org.nvmexpress.discovery 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@23 -- # nqn=nqn.2016-06.io.spdk:cnode 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@25 -- # host_nqn=nqn.2021-12.io.spdk:test 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@26 -- # host_sock=/tmp/host.sock 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@39 -- # nvmftestinit 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@448 -- # prepare_net_devs 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@410 -- # local -g is_hw=no 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@412 -- # remove_spdk_ns 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@285 -- # xtrace_disable 00:24:43.979 12:10:30 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # pci_devs=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@291 -- # local -a pci_devs 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # pci_net_devs=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # pci_drivers=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@293 -- # local -A pci_drivers 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # net_devs=() 00:24:49.259 12:10:35 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@295 -- # local -ga net_devs 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # e810=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@296 -- # local -ga e810 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # x722=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@297 -- # local -ga x722 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # mlx=() 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@298 -- # local -ga mlx 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:24:49.259 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:24:49.260 Found 0000:86:00.0 (0x8086 - 0x159b) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 
0x159b == \0\x\1\0\1\7 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:24:49.260 Found 0000:86:00.1 (0x8086 - 0x159b) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:24:49.260 Found net devices under 0000:86:00.0: cvl_0_0 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@390 -- # [[ up == up ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:24:49.260 
12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:24:49.260 Found net devices under 0000:86:00.1: cvl_0_1 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@414 -- # is_hw=yes 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:24:49.260 12:10:35 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:24:49.260 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:24:49.260 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.263 ms 00:24:49.260 00:24:49.260 --- 10.0.0.2 ping statistics --- 00:24:49.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.260 rtt min/avg/max/mdev = 0.263/0.263/0.263/0.000 ms 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:24:49.260 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:24:49.260 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.434 ms 00:24:49.260 00:24:49.260 --- 10.0.0.1 ping statistics --- 00:24:49.260 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:24:49.260 rtt min/avg/max/mdev = 0.434/0.434/0.434/0.000 ms 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@422 -- # return 0 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@40 -- # nvmfappstart -m 0x2 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@722 -- # xtrace_disable 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@481 -- # nvmfpid=444269 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@482 -- # waitforlisten 444269 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 444269 ']' 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:24:49.260 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:49.261 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:24:49.261 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
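nvmf_tcp_init above splits the two E810 ports into the test topology: cvl_0_0 is moved into the cvl_0_0_ns_spdk namespace and carries the target address 10.0.0.2, while cvl_0_1 stays in the root namespace as the initiator side at 10.0.0.1; the two pings just confirmed reachability in both directions. Roughly, as traced (interface names are specific to this node, and the final line condenses what nvmfappstart -m 0x2 does):

    TGT_IF=cvl_0_0 INI_IF=cvl_0_1 NS=cvl_0_0_ns_spdk

    ip -4 addr flush "$TGT_IF"
    ip -4 addr flush "$INI_IF"
    ip netns add "$NS"
    ip link set "$TGT_IF" netns "$NS"
    ip addr add 10.0.0.1/24 dev "$INI_IF"
    ip netns exec "$NS" ip addr add 10.0.0.2/24 dev "$TGT_IF"
    ip link set "$INI_IF" up
    ip netns exec "$NS" ip link set "$TGT_IF" up
    ip netns exec "$NS" ip link set lo up
    iptables -I INPUT 1 -i "$INI_IF" -p tcp --dport 4420 -j ACCEPT
    ping -c 1 10.0.0.2                      # root namespace reaches the target address
    ip netns exec "$NS" ping -c 1 10.0.0.1  # namespace reaches the initiator address

    # nvmfappstart -m 0x2: run the target app inside the namespace (pid 444269 in this log)
    ip netns exec "$NS" ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0x2 &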
00:24:49.261 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:49.261 12:10:36 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.261 [2024-07-25 12:10:36.249995] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:24:49.261 [2024-07-25 12:10:36.250040] [ DPDK EAL parameters: nvmf -c 0x2 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:24:49.261 EAL: No free 2048 kB hugepages reported on node 1 00:24:49.261 [2024-07-25 12:10:36.307268] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:49.261 [2024-07-25 12:10:36.387452] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:24:49.261 [2024-07-25 12:10:36.387484] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:24:49.261 [2024-07-25 12:10:36.387491] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:24:49.261 [2024-07-25 12:10:36.387497] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:24:49.261 [2024-07-25 12:10:36.387503] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:24:49.261 [2024-07-25 12:10:36.387519] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@728 -- # xtrace_disable 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@43 -- # rpc_cmd 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:49.828 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.088 [2024-07-25 12:10:37.082857] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:24:50.088 [2024-07-25 12:10:37.090978] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 8009 *** 00:24:50.088 null0 00:24:50.088 [2024-07-25 12:10:37.122990] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@59 -- # hostpid=444412 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@60 -- # waitforlisten 444412 /tmp/host.sock 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@58 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@829 -- # '[' -z 444412 ']' 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@833 -- # local rpc_addr=/tmp/host.sock 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@834 -- # local max_retries=100 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock...' 00:24:50.088 Waiting for process to start up and listen on UNIX domain socket /tmp/host.sock... 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@838 -- # xtrace_disable 00:24:50.088 12:10:37 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:50.088 [2024-07-25 12:10:37.191740] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:24:50.088 [2024-07-25 12:10:37.191782] [ DPDK EAL parameters: nvmf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid444412 ] 00:24:50.088 EAL: No free 2048 kB hugepages reported on node 1 00:24:50.088 [2024-07-25 12:10:37.245764] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:50.088 [2024-07-25 12:10:37.325237] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@862 -- # return 0 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@62 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; killprocess $hostpid; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@65 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_set_options -e 1 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@66 -- # rpc_cmd -s /tmp/host.sock framework_start_init 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@69 -- # rpc_cmd -s /tmp/host.sock bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 -q nqn.2021-12.io.spdk:test --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 --fast-io-fail-timeout-sec 1 --wait-for-attach 00:24:51.026 
12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:51.026 12:10:38 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:51.964 [2024-07-25 12:10:39.143210] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:24:51.964 [2024-07-25 12:10:39.143231] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:24:51.964 [2024-07-25 12:10:39.143246] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:24:52.223 [2024-07-25 12:10:39.230568] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme0 00:24:52.223 [2024-07-25 12:10:39.293938] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:24:52.223 [2024-07-25 12:10:39.293983] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:24:52.223 [2024-07-25 12:10:39.294002] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:24:52.223 [2024-07-25 12:10:39.294015] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme0 done 00:24:52.223 [2024-07-25 12:10:39.294033] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@72 -- # wait_for_bdev nvme0n1 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.223 [2024-07-25 12:10:39.302521] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x19aee60 was disconnected and freed. delete nvme_qpair. 
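From here the host side drives everything over /tmp/host.sock: a second nvmf_tgt instance is started with --wait-for-rpc, bdev_nvme options are set, the framework is initialized, and bdev_nvme_start_discovery attaches to the discovery service at 10.0.0.2:8009, which produced the nvme0/nvme0n1 attach messages above. A sketch of that RPC sequence; rpc.py with -s is used here as a stand-in for the harness's rpc_cmd helper, which talks to the same socket:

    # Host/initiator app (pid 444412 in this log), core mask 0x1, RPC socket /tmp/host.sock
    ./build/bin/nvmf_tgt -m 0x1 -r /tmp/host.sock --wait-for-rpc -L bdev_nvme &

    RPC="scripts/rpc.py -s /tmp/host.sock"

    $RPC bdev_nvme_set_options -e 1        # option as passed by discovery_remove_ifc.sh@65
    $RPC framework_start_init
    $RPC bdev_nvme_start_discovery -b nvme -t tcp -a 10.0.0.2 -s 8009 -f ipv4 \
        -q nqn.2021-12.io.spdk:test \
        --ctrlr-loss-timeout-sec 2 --reconnect-delay-sec 1 \
        --fast-io-fail-timeout-sec 1 --wait-for-attach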
00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != \n\v\m\e\0\n\1 ]] 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@75 -- # ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@76 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@79 -- # wait_for_bdev '' 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:52.223 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:52.224 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:52.224 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:52.224 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:52.224 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:52.224 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:52.483 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:52.483 12:10:39 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:53.421 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:53.422 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:53.422 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:53.422 12:10:40 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:54.357 12:10:41 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:54.357 12:10:41 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:55.733 12:10:42 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:56.670 12:10:43 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:57.607 12:10:44 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:57.607 [2024-07-25 12:10:44.735243] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 110: Connection timed out 00:24:57.607 [2024-07-25 12:10:44.735281] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.607 [2024-07-25 12:10:44.735291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.607 [2024-07-25 12:10:44.735300] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.607 [2024-07-25 12:10:44.735307] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.607 [2024-07-25 12:10:44.735314] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.607 [2024-07-25 12:10:44.735325] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.607 [2024-07-25 12:10:44.735332] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.607 [2024-07-25 12:10:44.735338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.607 [2024-07-25 12:10:44.735345] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:57.607 [2024-07-25 12:10:44.735352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:57.607 [2024-07-25 12:10:44.735358] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19756b0 is same with the state(5) to be set 00:24:57.607 [2024-07-25 12:10:44.745266] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19756b0 (9): Bad file descriptor 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:57.607 12:10:44 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:57.607 [2024-07-25 12:10:44.755303] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- 
host/discovery_remove_ifc.sh@29 -- # sort 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:58.543 [2024-07-25 12:10:45.762060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 110 00:24:58.543 [2024-07-25 12:10:45.762095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x19756b0 with addr=10.0.0.2, port=4420 00:24:58.543 [2024-07-25 12:10:45.762108] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x19756b0 is same with the state(5) to be set 00:24:58.543 [2024-07-25 12:10:45.762141] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x19756b0 (9): Bad file descriptor 00:24:58.543 [2024-07-25 12:10:45.762540] bdev_nvme.c:2899:bdev_nvme_failover_ctrlr_unsafe: *NOTICE*: Unable to perform failover, already in progress. 00:24:58.543 [2024-07-25 12:10:45.762566] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:58.543 [2024-07-25 12:10:45.762576] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:58.543 [2024-07-25 12:10:45.762587] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:58.543 [2024-07-25 12:10:45.762603] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:24:58.543 [2024-07-25 12:10:45.762613] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] resetting controller 00:24:58.543 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:58.803 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme0n1 != '' ]] 00:24:58.803 12:10:45 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:24:59.740 [2024-07-25 12:10:46.765094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:24:59.740 [2024-07-25 12:10:46.765117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:24:59.740 [2024-07-25 12:10:46.765125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode0] controller reinitialization failed 00:24:59.740 [2024-07-25 12:10:46.765131] nvme_ctrlr.c:1094:nvme_ctrlr_fail: *NOTICE*: [nqn.2016-06.io.spdk:cnode0] already in failed state 00:24:59.740 [2024-07-25 12:10:46.765158] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
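The repeated rpc_cmd | jq | sort | xargs runs above are the script's get_bdev_list helper, and the sleep-1 loop is wait_for_bdev: after the target-side address was deleted and cvl_0_0 taken down (discovery_remove_ifc.sh@75-76), the test polls until the attached bdev drops out while bdev_nvme keeps failing reconnects with errno 110. A rough reconstruction of those helpers and the fault injection they bracket; the real script's comparison and error handling may differ in detail:

    get_bdev_list() {
        # stand-in for rpc_cmd -s /tmp/host.sock, as in the trace
        scripts/rpc.py -s /tmp/host.sock bdev_get_bdevs | jq -r '.[].name' | sort | xargs
    }

    wait_for_bdev() {               # poll until the bdev list matches the expectation
        local expected=$1
        while [[ "$(get_bdev_list)" != "$expected" ]]; do
            sleep 1
        done
    }

    wait_for_bdev nvme0n1           # discovery attached, namespace bdev present

    # Fault injection: pull the target address and link inside the namespace
    ip netns exec cvl_0_0_ns_spdk ip addr del 10.0.0.2/24 dev cvl_0_0
    ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 down

    wait_for_bdev ''                # ctrlr-loss timeout (2s) expires, bdev is deleted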
00:24:59.740 [2024-07-25 12:10:46.765175] bdev_nvme.c:6762:remove_discovery_entry: *INFO*: Discovery[10.0.0.2:8009] Remove discovery entry: nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 00:24:59.741 [2024-07-25 12:10:46.765192] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:0 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.741 [2024-07-25 12:10:46.765200] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.741 [2024-07-25 12:10:46.765209] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:1 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.741 [2024-07-25 12:10:46.765216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.741 [2024-07-25 12:10:46.765222] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:2 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.741 [2024-07-25 12:10:46.765229] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.741 [2024-07-25 12:10:46.765236] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:3 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.741 [2024-07-25 12:10:46.765243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.741 [2024-07-25 12:10:46.765250] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: KEEP ALIVE (18) qid:0 cid:4 nsid:0 cdw10:00000000 cdw11:00000000 00:24:59.741 [2024-07-25 12:10:46.765259] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:24:59.741 [2024-07-25 12:10:46.765266] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] in failed state. 
00:24:59.741 [2024-07-25 12:10:46.765370] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x1974a80 (9): Bad file descriptor 00:24:59.741 [2024-07-25 12:10:46.766382] nvme_fabric.c: 214:nvme_fabric_prop_get_cmd_async: *ERROR*: Failed to send Property Get fabrics command 00:24:59.741 [2024-07-25 12:10:46.766392] nvme_ctrlr.c:1213:nvme_ctrlr_shutdown_async: *ERROR*: [nqn.2014-08.org.nvmexpress.discovery] Failed to read the CC register 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != '' ]] 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@82 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@83 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@86 -- # wait_for_bdev nvme1n1 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:24:59.741 12:10:46 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.121 12:10:47 
nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.121 12:10:47 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.121 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:01.121 12:10:48 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.688 [2024-07-25 12:10:48.818066] bdev_nvme.c:7011:discovery_attach_cb: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr attached 00:25:01.688 [2024-07-25 12:10:48.818082] bdev_nvme.c:7091:discovery_poller: *INFO*: Discovery[10.0.0.2:8009] discovery ctrlr connected 00:25:01.688 [2024-07-25 12:10:48.818096] bdev_nvme.c:6974:get_discovery_log_page: *INFO*: Discovery[10.0.0.2:8009] sent discovery log page command 00:25:01.947 [2024-07-25 12:10:48.946491] bdev_nvme.c:6940:discovery_log_page_cb: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 new subsystem nvme1 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:01.947 [2024-07-25 12:10:49.088475] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 8 blocks with offset 0 00:25:01.947 [2024-07-25 12:10:49.088512] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 1 blocks with offset 0 00:25:01.947 [2024-07-25 12:10:49.088530] bdev_nvme.c:7801:bdev_nvme_readv: *DEBUG*: read 64 blocks with offset 0 00:25:01.947 [2024-07-25 12:10:49.088547] bdev_nvme.c:6830:discovery_attach_controller_done: *INFO*: Discovery[10.0.0.2:8009] attach nvme1 done 00:25:01.947 [2024-07-25 12:10:49.088555] bdev_nvme.c:6789:discovery_remove_controllers: *INFO*: Discovery[10.0.0.2:8009] NVM nqn.2016-06.io.spdk:cnode0:10.0.0.2:4420 found again 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ '' != \n\v\m\e\1\n\1 ]] 00:25:01.947 12:10:49 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@34 -- # sleep 1 00:25:01.947 [2024-07-25 12:10:49.097242] bdev_nvme.c:1617:bdev_nvme_disconnected_qpair_cb: *DEBUG*: qpair 0x197c180 was disconnected and freed. 
delete nvme_qpair. 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # get_bdev_list 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # rpc_cmd -s /tmp/host.sock bdev_get_bdevs 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # jq -r '.[].name' 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # sort 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@29 -- # xargs 00:25:02.885 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@33 -- # [[ nvme1n1 != \n\v\m\e\1\n\1 ]] 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@88 -- # trap - SIGINT SIGTERM EXIT 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@90 -- # killprocess 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 444412 ']' 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 444412' 00:25:03.144 killing process with pid 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 444412 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- host/discovery_remove_ifc.sh@91 -- # nvmftestfini 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@117 -- # sync 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@120 -- # set +e 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:03.144 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:03.144 rmmod nvme_tcp 00:25:03.404 rmmod nvme_fabrics 00:25:03.404 rmmod nvme_keyring 
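Teardown is underway at this point: the trap is cleared, killprocess stops the host app (pid 444412), and nvmfcleanup has just unloaded nvme_tcp together with its dependent modules; the trace that follows removes nvme-fabrics, kills the in-namespace target (pid 444269), and flushes the initiator interface. A sketch of that sequence, with remove_spdk_ns approximated by a plain namespace delete:

    killprocess() {                 # condensed from the traced checks: never kill a sudo wrapper
        local pid=$1
        [[ "$(ps --no-headers -o comm= "$pid")" != sudo ]] || return 1
        echo "killing process with pid $pid"
        kill "$pid" && wait "$pid" || true
    }

    killprocess "$hostpid"          # host app on /tmp/host.sock
    sync
    modprobe -v -r nvme-tcp         # also drops nvme_fabrics and nvme_keyring, as logged above
    modprobe -v -r nvme-fabrics
    killprocess "$nvmfpid"          # nvmf_tgt running inside cvl_0_0_ns_spdk
    ip netns delete cvl_0_0_ns_spdk # assumed equivalent of remove_spdk_ns for this namespace
    ip -4 addr flush cvl_0_1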
00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@124 -- # set -e 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@125 -- # return 0 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@489 -- # '[' -n 444269 ']' 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@490 -- # killprocess 444269 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@948 -- # '[' -z 444269 ']' 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@952 -- # kill -0 444269 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # uname 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 444269 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@966 -- # echo 'killing process with pid 444269' 00:25:03.404 killing process with pid 444269 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@967 -- # kill 444269 00:25:03.404 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@972 -- # wait 444269 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:03.663 12:10:50 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:05.620 00:25:05.620 real 0m22.046s 00:25:05.620 user 0m28.586s 00:25:05.620 sys 0m5.419s 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host.nvmf_discovery_remove_ifc -- common/autotest_common.sh@10 -- # set +x 00:25:05.620 ************************************ 00:25:05.620 END TEST nvmf_discovery_remove_ifc 00:25:05.620 ************************************ 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host 
-- nvmf/nvmf_host.sh@29 -- # run_test nvmf_identify_kernel_target /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:05.620 ************************************ 00:25:05.620 START TEST nvmf_identify_kernel_target 00:25:05.620 ************************************ 00:25:05.620 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/identify_kernel_nvmf.sh --transport=tcp 00:25:05.880 * Looking for test storage... 00:25:05.880 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # uname -s 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@508 -- # [[ -e 
/bin/wpdk_common.sh ]] 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:05.880 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@5 -- # export PATH 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@47 -- # : 0 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:05.881 12:10:52 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@11 -- # nvmftestinit 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@285 -- # xtrace_disable 00:25:05.881 12:10:52 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # pci_devs=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # net_devs=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # e810=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@296 -- # local -ga e810 00:25:11.157 12:10:57 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # x722=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@297 -- # local -ga x722 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # mlx=() 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@298 -- # local -ga mlx 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:11.157 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:11.158 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 
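The "Found 0000:86:00.x" lines here, and the "Found net devices under ..." lines just below, come from a sysfs walk over the matched PCI functions. Stripped of the driver and link-state checks, the loop looks roughly like this; the PCI addresses and cvl_0_* interface names are from this run, the rest is a sketch.

for pci in "${pci_devs[@]}"; do                           # e.g. 0000:86:00.0, 0000:86:00.1
    pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*)      # netdevs the kernel bound to this function
    pci_net_devs=("${pci_net_devs[@]##*/}")               # strip the sysfs path, keep the ifname
    echo "Found net devices under $pci: ${pci_net_devs[*]}"   # -> cvl_0_0 and cvl_0_1 below
    net_devs+=("${pci_net_devs[@]}")
done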
00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:11.158 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:11.158 Found net devices under 0000:86:00.0: cvl_0_0 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:25:11.158 Found net devices under 0000:86:00.1: cvl_0_1 00:25:11.158 
12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@414 -- # is_hw=yes 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:11.158 12:10:57 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:11.158 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:11.159 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 
00:25:11.159 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.195 ms 00:25:11.159 00:25:11.159 --- 10.0.0.2 ping statistics --- 00:25:11.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.159 rtt min/avg/max/mdev = 0.195/0.195/0.195/0.000 ms 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:11.159 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:25:11.159 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.370 ms 00:25:11.159 00:25:11.159 --- 10.0.0.1 ping statistics --- 00:25:11.159 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:11.159 rtt min/avg/max/mdev = 0.370/0.370/0.370/0.000 ms 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@422 -- # return 0 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@13 -- # trap 'nvmftestfini || :; clean_kernel_target' EXIT 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # get_main_ns_ip 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@741 -- # local ip 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@15 -- # target_ip=10.0.0.1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@16 -- # configure_kernel_target 
nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@639 -- # local block nvme 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@641 -- # [[ ! -e /sys/module/nvmet ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:11.159 12:10:58 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:13.698 Waiting for block devices as requested 00:25:13.698 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:13.698 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:13.698 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:13.958 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:13.958 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:13.958 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:13.958 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:14.218 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:14.218 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.218 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:14.476 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:14.476 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:14.476 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:14.476 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:14.735 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:14.735 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:14.735 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@378 -- # local block=nvme0n1 pt 
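Stepping back to the network setup a little earlier in the trace: the nvmf_tcp_init expansion amounts to the sequence below. The commands, interface names (cvl_0_0/cvl_0_1), namespace name, and addresses are taken from the trace itself; only the comments are added.

ip -4 addr flush cvl_0_0; ip -4 addr flush cvl_0_1        # start from clean interfaces
ip netns add cvl_0_0_ns_spdk                              # the target side lives in its own netns
ip link set cvl_0_0 netns cvl_0_0_ns_spdk                 # move the target-facing port into it
ip addr add 10.0.0.1/24 dev cvl_0_1                       # initiator address (host netns)
ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0   # target address
ip link set cvl_0_1 up
ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
ip netns exec cvl_0_0_ns_spdk ip link set lo up
iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # allow NVMe/TCP (4420) in
ping -c 1 10.0.0.2                                        # initiator -> target reachability
ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1          # target -> initiator reachability

With both pings succeeding, the test loads nvme-tcp and moves on to configure_kernel_target, whose configfs setup and nvme discover output follow in the trace.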
00:25:14.735 12:11:01 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:14.995 No valid GPT data, bailing 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@391 -- # pt= 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- scripts/common.sh@392 -- # return 1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@667 -- # echo 1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@669 -- # echo 1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@672 -- # echo tcp 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@673 -- # echo 4420 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@674 -- # echo ipv4 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:14.995 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:14.995 00:25:14.995 Discovery Log Number of Records 2, Generation counter 2 00:25:14.995 =====Discovery Log Entry 0====== 00:25:14.995 trtype: tcp 00:25:14.995 adrfam: ipv4 00:25:14.995 subtype: current discovery subsystem 00:25:14.995 treq: not specified, sq flow control disable supported 00:25:14.995 portid: 1 00:25:14.995 trsvcid: 4420 00:25:14.995 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:14.995 traddr: 10.0.0.1 00:25:14.995 eflags: none 00:25:14.995 sectype: none 00:25:14.995 =====Discovery Log Entry 1====== 00:25:14.995 trtype: tcp 00:25:14.995 adrfam: ipv4 00:25:14.995 subtype: nvme subsystem 00:25:14.995 treq: not specified, sq flow control disable supported 00:25:14.995 portid: 1 00:25:14.995 trsvcid: 4420 00:25:14.995 subnqn: nqn.2016-06.io.spdk:testnqn 00:25:14.995 traddr: 10.0.0.1 00:25:14.995 eflags: none 00:25:14.995 sectype: none 00:25:14.995 12:11:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 00:25:14.995 trsvcid:4420 subnqn:nqn.2014-08.org.nvmexpress.discovery' 00:25:14.995 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.995 ===================================================== 00:25:14.995 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2014-08.org.nvmexpress.discovery 00:25:14.995 ===================================================== 00:25:14.995 Controller Capabilities/Features 00:25:14.995 ================================ 00:25:14.995 Vendor ID: 0000 00:25:14.995 Subsystem Vendor ID: 0000 00:25:14.995 Serial Number: 2d0c371e952785e3bf8a 00:25:14.995 Model Number: Linux 00:25:14.995 Firmware Version: 6.7.0-68 00:25:14.995 Recommended Arb Burst: 0 00:25:14.995 IEEE OUI Identifier: 00 00 00 00:25:14.995 Multi-path I/O 00:25:14.995 May have multiple subsystem ports: No 00:25:14.995 May have multiple controllers: No 00:25:14.995 Associated with SR-IOV VF: No 00:25:14.995 Max Data Transfer Size: Unlimited 00:25:14.995 Max Number of Namespaces: 0 00:25:14.995 Max Number of I/O Queues: 1024 00:25:14.995 NVMe Specification Version (VS): 1.3 00:25:14.995 NVMe Specification Version (Identify): 1.3 00:25:14.995 Maximum Queue Entries: 1024 00:25:14.995 Contiguous Queues Required: No 00:25:14.995 Arbitration Mechanisms Supported 00:25:14.995 Weighted Round Robin: Not Supported 00:25:14.995 Vendor Specific: Not Supported 00:25:14.995 Reset Timeout: 7500 ms 00:25:14.995 Doorbell Stride: 4 bytes 00:25:14.995 NVM Subsystem Reset: Not Supported 00:25:14.995 Command Sets Supported 00:25:14.995 NVM Command Set: Supported 00:25:14.995 Boot Partition: Not Supported 00:25:14.996 Memory Page Size Minimum: 4096 bytes 00:25:14.996 Memory Page Size Maximum: 4096 bytes 00:25:14.996 Persistent Memory Region: Not Supported 00:25:14.996 Optional Asynchronous Events Supported 00:25:14.996 Namespace Attribute Notices: Not Supported 00:25:14.996 Firmware Activation Notices: Not Supported 00:25:14.996 ANA Change Notices: Not Supported 00:25:14.996 PLE Aggregate Log Change Notices: Not Supported 00:25:14.996 LBA Status Info Alert Notices: Not Supported 00:25:14.996 EGE Aggregate Log Change Notices: Not Supported 00:25:14.996 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.996 Zone Descriptor Change Notices: Not Supported 00:25:14.996 Discovery Log Change Notices: Supported 00:25:14.996 Controller Attributes 00:25:14.996 128-bit Host Identifier: Not Supported 00:25:14.996 Non-Operational Permissive Mode: Not Supported 00:25:14.996 NVM Sets: Not Supported 00:25:14.996 Read Recovery Levels: Not Supported 00:25:14.996 Endurance Groups: Not Supported 00:25:14.996 Predictable Latency Mode: Not Supported 00:25:14.996 Traffic Based Keep ALive: Not Supported 00:25:14.996 Namespace Granularity: Not Supported 00:25:14.996 SQ Associations: Not Supported 00:25:14.996 UUID List: Not Supported 00:25:14.996 Multi-Domain Subsystem: Not Supported 00:25:14.996 Fixed Capacity Management: Not Supported 00:25:14.996 Variable Capacity Management: Not Supported 00:25:14.996 Delete Endurance Group: Not Supported 00:25:14.996 Delete NVM Set: Not Supported 00:25:14.996 Extended LBA Formats Supported: Not Supported 00:25:14.996 Flexible Data Placement Supported: Not Supported 00:25:14.996 00:25:14.996 Controller Memory Buffer Support 00:25:14.996 ================================ 00:25:14.996 Supported: No 
00:25:14.996 00:25:14.996 Persistent Memory Region Support 00:25:14.996 ================================ 00:25:14.996 Supported: No 00:25:14.996 00:25:14.996 Admin Command Set Attributes 00:25:14.996 ============================ 00:25:14.996 Security Send/Receive: Not Supported 00:25:14.996 Format NVM: Not Supported 00:25:14.996 Firmware Activate/Download: Not Supported 00:25:14.996 Namespace Management: Not Supported 00:25:14.996 Device Self-Test: Not Supported 00:25:14.996 Directives: Not Supported 00:25:14.996 NVMe-MI: Not Supported 00:25:14.996 Virtualization Management: Not Supported 00:25:14.996 Doorbell Buffer Config: Not Supported 00:25:14.996 Get LBA Status Capability: Not Supported 00:25:14.996 Command & Feature Lockdown Capability: Not Supported 00:25:14.996 Abort Command Limit: 1 00:25:14.996 Async Event Request Limit: 1 00:25:14.996 Number of Firmware Slots: N/A 00:25:14.996 Firmware Slot 1 Read-Only: N/A 00:25:14.996 Firmware Activation Without Reset: N/A 00:25:14.996 Multiple Update Detection Support: N/A 00:25:14.996 Firmware Update Granularity: No Information Provided 00:25:14.996 Per-Namespace SMART Log: No 00:25:14.996 Asymmetric Namespace Access Log Page: Not Supported 00:25:14.996 Subsystem NQN: nqn.2014-08.org.nvmexpress.discovery 00:25:14.996 Command Effects Log Page: Not Supported 00:25:14.996 Get Log Page Extended Data: Supported 00:25:14.996 Telemetry Log Pages: Not Supported 00:25:14.996 Persistent Event Log Pages: Not Supported 00:25:14.996 Supported Log Pages Log Page: May Support 00:25:14.996 Commands Supported & Effects Log Page: Not Supported 00:25:14.996 Feature Identifiers & Effects Log Page:May Support 00:25:14.996 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.996 Data Area 4 for Telemetry Log: Not Supported 00:25:14.996 Error Log Page Entries Supported: 1 00:25:14.996 Keep Alive: Not Supported 00:25:14.996 00:25:14.996 NVM Command Set Attributes 00:25:14.996 ========================== 00:25:14.996 Submission Queue Entry Size 00:25:14.996 Max: 1 00:25:14.996 Min: 1 00:25:14.996 Completion Queue Entry Size 00:25:14.996 Max: 1 00:25:14.996 Min: 1 00:25:14.996 Number of Namespaces: 0 00:25:14.996 Compare Command: Not Supported 00:25:14.996 Write Uncorrectable Command: Not Supported 00:25:14.996 Dataset Management Command: Not Supported 00:25:14.996 Write Zeroes Command: Not Supported 00:25:14.996 Set Features Save Field: Not Supported 00:25:14.996 Reservations: Not Supported 00:25:14.996 Timestamp: Not Supported 00:25:14.996 Copy: Not Supported 00:25:14.996 Volatile Write Cache: Not Present 00:25:14.996 Atomic Write Unit (Normal): 1 00:25:14.996 Atomic Write Unit (PFail): 1 00:25:14.996 Atomic Compare & Write Unit: 1 00:25:14.996 Fused Compare & Write: Not Supported 00:25:14.996 Scatter-Gather List 00:25:14.996 SGL Command Set: Supported 00:25:14.996 SGL Keyed: Not Supported 00:25:14.996 SGL Bit Bucket Descriptor: Not Supported 00:25:14.996 SGL Metadata Pointer: Not Supported 00:25:14.996 Oversized SGL: Not Supported 00:25:14.996 SGL Metadata Address: Not Supported 00:25:14.996 SGL Offset: Supported 00:25:14.996 Transport SGL Data Block: Not Supported 00:25:14.996 Replay Protected Memory Block: Not Supported 00:25:14.996 00:25:14.996 Firmware Slot Information 00:25:14.996 ========================= 00:25:14.996 Active slot: 0 00:25:14.996 00:25:14.996 00:25:14.996 Error Log 00:25:14.996 ========= 00:25:14.996 00:25:14.996 Active Namespaces 00:25:14.996 ================= 00:25:14.996 Discovery Log Page 00:25:14.996 ================== 00:25:14.996 
Generation Counter: 2 00:25:14.996 Number of Records: 2 00:25:14.996 Record Format: 0 00:25:14.996 00:25:14.996 Discovery Log Entry 0 00:25:14.996 ---------------------- 00:25:14.996 Transport Type: 3 (TCP) 00:25:14.996 Address Family: 1 (IPv4) 00:25:14.996 Subsystem Type: 3 (Current Discovery Subsystem) 00:25:14.996 Entry Flags: 00:25:14.996 Duplicate Returned Information: 0 00:25:14.996 Explicit Persistent Connection Support for Discovery: 0 00:25:14.996 Transport Requirements: 00:25:14.996 Secure Channel: Not Specified 00:25:14.996 Port ID: 1 (0x0001) 00:25:14.996 Controller ID: 65535 (0xffff) 00:25:14.996 Admin Max SQ Size: 32 00:25:14.996 Transport Service Identifier: 4420 00:25:14.996 NVM Subsystem Qualified Name: nqn.2014-08.org.nvmexpress.discovery 00:25:14.996 Transport Address: 10.0.0.1 00:25:14.996 Discovery Log Entry 1 00:25:14.996 ---------------------- 00:25:14.996 Transport Type: 3 (TCP) 00:25:14.996 Address Family: 1 (IPv4) 00:25:14.996 Subsystem Type: 2 (NVM Subsystem) 00:25:14.996 Entry Flags: 00:25:14.996 Duplicate Returned Information: 0 00:25:14.996 Explicit Persistent Connection Support for Discovery: 0 00:25:14.996 Transport Requirements: 00:25:14.996 Secure Channel: Not Specified 00:25:14.996 Port ID: 1 (0x0001) 00:25:14.996 Controller ID: 65535 (0xffff) 00:25:14.996 Admin Max SQ Size: 32 00:25:14.996 Transport Service Identifier: 4420 00:25:14.996 NVM Subsystem Qualified Name: nqn.2016-06.io.spdk:testnqn 00:25:14.996 Transport Address: 10.0.0.1 00:25:14.996 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:25:14.996 EAL: No free 2048 kB hugepages reported on node 1 00:25:14.996 get_feature(0x01) failed 00:25:14.996 get_feature(0x02) failed 00:25:14.996 get_feature(0x04) failed 00:25:14.996 ===================================================== 00:25:14.996 NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:25:14.996 ===================================================== 00:25:14.996 Controller Capabilities/Features 00:25:14.996 ================================ 00:25:14.996 Vendor ID: 0000 00:25:14.997 Subsystem Vendor ID: 0000 00:25:14.997 Serial Number: 85fd54f9bba6ef95d935 00:25:14.997 Model Number: SPDK-nqn.2016-06.io.spdk:testnqn 00:25:14.997 Firmware Version: 6.7.0-68 00:25:14.997 Recommended Arb Burst: 6 00:25:14.997 IEEE OUI Identifier: 00 00 00 00:25:14.997 Multi-path I/O 00:25:14.997 May have multiple subsystem ports: Yes 00:25:14.997 May have multiple controllers: Yes 00:25:14.997 Associated with SR-IOV VF: No 00:25:14.997 Max Data Transfer Size: Unlimited 00:25:14.997 Max Number of Namespaces: 1024 00:25:14.997 Max Number of I/O Queues: 128 00:25:14.997 NVMe Specification Version (VS): 1.3 00:25:14.997 NVMe Specification Version (Identify): 1.3 00:25:14.997 Maximum Queue Entries: 1024 00:25:14.997 Contiguous Queues Required: No 00:25:14.997 Arbitration Mechanisms Supported 00:25:14.997 Weighted Round Robin: Not Supported 00:25:14.997 Vendor Specific: Not Supported 00:25:14.997 Reset Timeout: 7500 ms 00:25:14.997 Doorbell Stride: 4 bytes 00:25:14.997 NVM Subsystem Reset: Not Supported 00:25:14.997 Command Sets Supported 00:25:14.997 NVM Command Set: Supported 00:25:14.997 Boot Partition: Not Supported 00:25:14.997 Memory Page Size Minimum: 4096 bytes 00:25:14.997 Memory Page Size Maximum: 4096 bytes 00:25:14.997 
Persistent Memory Region: Not Supported 00:25:14.997 Optional Asynchronous Events Supported 00:25:14.997 Namespace Attribute Notices: Supported 00:25:14.997 Firmware Activation Notices: Not Supported 00:25:14.997 ANA Change Notices: Supported 00:25:14.997 PLE Aggregate Log Change Notices: Not Supported 00:25:14.997 LBA Status Info Alert Notices: Not Supported 00:25:14.997 EGE Aggregate Log Change Notices: Not Supported 00:25:14.997 Normal NVM Subsystem Shutdown event: Not Supported 00:25:14.997 Zone Descriptor Change Notices: Not Supported 00:25:14.997 Discovery Log Change Notices: Not Supported 00:25:14.997 Controller Attributes 00:25:14.997 128-bit Host Identifier: Supported 00:25:14.997 Non-Operational Permissive Mode: Not Supported 00:25:14.997 NVM Sets: Not Supported 00:25:14.997 Read Recovery Levels: Not Supported 00:25:14.997 Endurance Groups: Not Supported 00:25:14.997 Predictable Latency Mode: Not Supported 00:25:14.997 Traffic Based Keep ALive: Supported 00:25:14.997 Namespace Granularity: Not Supported 00:25:14.997 SQ Associations: Not Supported 00:25:14.997 UUID List: Not Supported 00:25:14.997 Multi-Domain Subsystem: Not Supported 00:25:14.997 Fixed Capacity Management: Not Supported 00:25:14.997 Variable Capacity Management: Not Supported 00:25:14.997 Delete Endurance Group: Not Supported 00:25:14.997 Delete NVM Set: Not Supported 00:25:14.997 Extended LBA Formats Supported: Not Supported 00:25:14.997 Flexible Data Placement Supported: Not Supported 00:25:14.997 00:25:14.997 Controller Memory Buffer Support 00:25:14.997 ================================ 00:25:14.997 Supported: No 00:25:14.997 00:25:14.997 Persistent Memory Region Support 00:25:14.997 ================================ 00:25:14.997 Supported: No 00:25:14.997 00:25:14.997 Admin Command Set Attributes 00:25:14.997 ============================ 00:25:14.997 Security Send/Receive: Not Supported 00:25:14.997 Format NVM: Not Supported 00:25:14.997 Firmware Activate/Download: Not Supported 00:25:14.997 Namespace Management: Not Supported 00:25:14.997 Device Self-Test: Not Supported 00:25:14.997 Directives: Not Supported 00:25:14.997 NVMe-MI: Not Supported 00:25:14.997 Virtualization Management: Not Supported 00:25:14.997 Doorbell Buffer Config: Not Supported 00:25:14.997 Get LBA Status Capability: Not Supported 00:25:14.997 Command & Feature Lockdown Capability: Not Supported 00:25:14.997 Abort Command Limit: 4 00:25:14.997 Async Event Request Limit: 4 00:25:14.997 Number of Firmware Slots: N/A 00:25:14.997 Firmware Slot 1 Read-Only: N/A 00:25:14.997 Firmware Activation Without Reset: N/A 00:25:14.997 Multiple Update Detection Support: N/A 00:25:14.997 Firmware Update Granularity: No Information Provided 00:25:14.997 Per-Namespace SMART Log: Yes 00:25:14.997 Asymmetric Namespace Access Log Page: Supported 00:25:14.997 ANA Transition Time : 10 sec 00:25:14.997 00:25:14.997 Asymmetric Namespace Access Capabilities 00:25:14.997 ANA Optimized State : Supported 00:25:14.997 ANA Non-Optimized State : Supported 00:25:14.997 ANA Inaccessible State : Supported 00:25:14.997 ANA Persistent Loss State : Supported 00:25:14.997 ANA Change State : Supported 00:25:14.997 ANAGRPID is not changed : No 00:25:14.997 Non-Zero ANAGRPID for NS Mgmt Cmd : Not Supported 00:25:14.997 00:25:14.997 ANA Group Identifier Maximum : 128 00:25:14.997 Number of ANA Group Identifiers : 128 00:25:14.997 Max Number of Allowed Namespaces : 1024 00:25:14.997 Subsystem NQN: nqn.2016-06.io.spdk:testnqn 00:25:14.997 Command Effects Log Page: Supported 
00:25:14.997 Get Log Page Extended Data: Supported 00:25:14.997 Telemetry Log Pages: Not Supported 00:25:14.997 Persistent Event Log Pages: Not Supported 00:25:14.997 Supported Log Pages Log Page: May Support 00:25:14.997 Commands Supported & Effects Log Page: Not Supported 00:25:14.997 Feature Identifiers & Effects Log Page:May Support 00:25:14.997 NVMe-MI Commands & Effects Log Page: May Support 00:25:14.997 Data Area 4 for Telemetry Log: Not Supported 00:25:14.997 Error Log Page Entries Supported: 128 00:25:14.997 Keep Alive: Supported 00:25:14.997 Keep Alive Granularity: 1000 ms 00:25:14.997 00:25:14.997 NVM Command Set Attributes 00:25:14.997 ========================== 00:25:14.997 Submission Queue Entry Size 00:25:14.997 Max: 64 00:25:14.997 Min: 64 00:25:14.997 Completion Queue Entry Size 00:25:14.997 Max: 16 00:25:14.997 Min: 16 00:25:14.997 Number of Namespaces: 1024 00:25:14.997 Compare Command: Not Supported 00:25:14.997 Write Uncorrectable Command: Not Supported 00:25:14.997 Dataset Management Command: Supported 00:25:14.997 Write Zeroes Command: Supported 00:25:14.997 Set Features Save Field: Not Supported 00:25:14.997 Reservations: Not Supported 00:25:14.997 Timestamp: Not Supported 00:25:14.997 Copy: Not Supported 00:25:14.997 Volatile Write Cache: Present 00:25:14.997 Atomic Write Unit (Normal): 1 00:25:14.997 Atomic Write Unit (PFail): 1 00:25:14.997 Atomic Compare & Write Unit: 1 00:25:14.997 Fused Compare & Write: Not Supported 00:25:14.997 Scatter-Gather List 00:25:14.997 SGL Command Set: Supported 00:25:14.997 SGL Keyed: Not Supported 00:25:14.997 SGL Bit Bucket Descriptor: Not Supported 00:25:14.997 SGL Metadata Pointer: Not Supported 00:25:14.997 Oversized SGL: Not Supported 00:25:14.997 SGL Metadata Address: Not Supported 00:25:14.997 SGL Offset: Supported 00:25:14.997 Transport SGL Data Block: Not Supported 00:25:14.997 Replay Protected Memory Block: Not Supported 00:25:14.997 00:25:14.997 Firmware Slot Information 00:25:14.997 ========================= 00:25:14.997 Active slot: 0 00:25:14.997 00:25:14.997 Asymmetric Namespace Access 00:25:14.997 =========================== 00:25:14.997 Change Count : 0 00:25:14.997 Number of ANA Group Descriptors : 1 00:25:14.997 ANA Group Descriptor : 0 00:25:14.997 ANA Group ID : 1 00:25:14.997 Number of NSID Values : 1 00:25:14.997 Change Count : 0 00:25:14.997 ANA State : 1 00:25:14.997 Namespace Identifier : 1 00:25:14.997 00:25:14.997 Commands Supported and Effects 00:25:14.997 ============================== 00:25:14.997 Admin Commands 00:25:14.997 -------------- 00:25:14.997 Get Log Page (02h): Supported 00:25:14.997 Identify (06h): Supported 00:25:14.997 Abort (08h): Supported 00:25:14.997 Set Features (09h): Supported 00:25:14.997 Get Features (0Ah): Supported 00:25:14.997 Asynchronous Event Request (0Ch): Supported 00:25:14.997 Keep Alive (18h): Supported 00:25:14.997 I/O Commands 00:25:14.997 ------------ 00:25:14.997 Flush (00h): Supported 00:25:14.997 Write (01h): Supported LBA-Change 00:25:14.997 Read (02h): Supported 00:25:14.997 Write Zeroes (08h): Supported LBA-Change 00:25:14.997 Dataset Management (09h): Supported 00:25:14.997 00:25:14.998 Error Log 00:25:14.998 ========= 00:25:14.998 Entry: 0 00:25:14.998 Error Count: 0x3 00:25:14.998 Submission Queue Id: 0x0 00:25:14.998 Command Id: 0x5 00:25:14.998 Phase Bit: 0 00:25:14.998 Status Code: 0x2 00:25:14.998 Status Code Type: 0x0 00:25:14.998 Do Not Retry: 1 00:25:14.998 Error Location: 0x28 00:25:14.998 LBA: 0x0 00:25:14.998 Namespace: 0x0 00:25:14.998 Vendor Log 
Page: 0x0 00:25:14.998 ----------- 00:25:14.998 Entry: 1 00:25:14.998 Error Count: 0x2 00:25:14.998 Submission Queue Id: 0x0 00:25:14.998 Command Id: 0x5 00:25:14.998 Phase Bit: 0 00:25:14.998 Status Code: 0x2 00:25:14.998 Status Code Type: 0x0 00:25:14.998 Do Not Retry: 1 00:25:14.998 Error Location: 0x28 00:25:14.998 LBA: 0x0 00:25:14.998 Namespace: 0x0 00:25:14.998 Vendor Log Page: 0x0 00:25:14.998 ----------- 00:25:14.998 Entry: 2 00:25:14.998 Error Count: 0x1 00:25:14.998 Submission Queue Id: 0x0 00:25:14.998 Command Id: 0x4 00:25:14.998 Phase Bit: 0 00:25:14.998 Status Code: 0x2 00:25:14.998 Status Code Type: 0x0 00:25:14.998 Do Not Retry: 1 00:25:14.998 Error Location: 0x28 00:25:14.998 LBA: 0x0 00:25:14.998 Namespace: 0x0 00:25:14.998 Vendor Log Page: 0x0 00:25:14.998 00:25:14.998 Number of Queues 00:25:14.998 ================ 00:25:14.998 Number of I/O Submission Queues: 128 00:25:14.998 Number of I/O Completion Queues: 128 00:25:14.998 00:25:14.998 ZNS Specific Controller Data 00:25:14.998 ============================ 00:25:14.998 Zone Append Size Limit: 0 00:25:14.998 00:25:14.998 00:25:14.998 Active Namespaces 00:25:14.998 ================= 00:25:14.998 get_feature(0x05) failed 00:25:14.998 Namespace ID:1 00:25:14.998 Command Set Identifier: NVM (00h) 00:25:14.998 Deallocate: Supported 00:25:14.998 Deallocated/Unwritten Error: Not Supported 00:25:14.998 Deallocated Read Value: Unknown 00:25:14.998 Deallocate in Write Zeroes: Not Supported 00:25:14.998 Deallocated Guard Field: 0xFFFF 00:25:14.998 Flush: Supported 00:25:14.998 Reservation: Not Supported 00:25:14.998 Namespace Sharing Capabilities: Multiple Controllers 00:25:14.998 Size (in LBAs): 1953525168 (931GiB) 00:25:14.998 Capacity (in LBAs): 1953525168 (931GiB) 00:25:14.998 Utilization (in LBAs): 1953525168 (931GiB) 00:25:14.998 UUID: ea8cfdd2-77b2-4806-816d-3704354efbc6 00:25:14.998 Thin Provisioning: Not Supported 00:25:14.998 Per-NS Atomic Units: Yes 00:25:14.998 Atomic Boundary Size (Normal): 0 00:25:14.998 Atomic Boundary Size (PFail): 0 00:25:14.998 Atomic Boundary Offset: 0 00:25:14.998 NGUID/EUI64 Never Reused: No 00:25:14.998 ANA group ID: 1 00:25:14.998 Namespace Write Protected: No 00:25:14.998 Number of LBA Formats: 1 00:25:14.998 Current LBA Format: LBA Format #00 00:25:14.998 LBA Format #00: Data Size: 512 Metadata Size: 0 00:25:14.998 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # nvmftestfini 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@117 -- # sync 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@120 -- # set +e 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:14.998 rmmod nvme_tcp 00:25:14.998 rmmod nvme_fabrics 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@124 -- # set -e 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@125 -- # return 0 00:25:14.998 12:11:02 
nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@489 -- # '[' -n '' ']' 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:14.998 12:11:02 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- host/identify_kernel_nvmf.sh@1 -- # clean_kernel_target 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@686 -- # echo 0 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:25:17.535 12:11:04 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:25:20.073 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:25:20.073 0000:80:04.1 (8086 2021): ioatdma -> 
vfio-pci 00:25:20.073 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:25:20.641 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:25:20.900 00:25:20.900 real 0m15.093s 00:25:20.900 user 0m3.779s 00:25:20.900 sys 0m7.734s 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@1124 -- # xtrace_disable 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host.nvmf_identify_kernel_target -- common/autotest_common.sh@10 -- # set +x 00:25:20.900 ************************************ 00:25:20.900 END TEST nvmf_identify_kernel_target 00:25:20.900 ************************************ 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@30 -- # run_test nvmf_auth_host /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:25:20.900 12:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:25:20.901 12:11:07 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:25:20.901 ************************************ 00:25:20.901 START TEST nvmf_auth_host 00:25:20.901 ************************************ 00:25:20.901 12:11:07 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/auth.sh --transport=tcp 00:25:20.901 * Looking for test storage... 00:25:20.901 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # uname -s 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@21 -- # 
NET_TYPE=phy 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@5 -- # export PATH 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@47 -- # : 0 
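The block traced above (clean_kernel_target from nvmf/common.sh) tears down the kernel nvmet target left behind by the previous test through configfs, after which setup.sh rebinds the devices to vfio-pci. A minimal standalone sketch of that teardown, assuming the standard nvmet configfs layout and the test NQN used in this run (the bare 'echo 0' in the trace is assumed to disable the namespace):
nqn=nqn.2016-06.io.spdk:testnqn
nvmet=/sys/kernel/config/nvmet
if [[ -e "$nvmet/subsystems/$nqn" ]]; then
    echo 0 > "$nvmet/subsystems/$nqn/namespaces/1/enable"   # disable the namespace (assumed target of the bare 'echo 0')
    rm -f "$nvmet/ports/1/subsystems/$nqn"                  # unlink the subsystem from port 1
    rmdir "$nvmet/subsystems/$nqn/namespaces/1"
    rmdir "$nvmet/ports/1"
    rmdir "$nvmet/subsystems/$nqn"
    modprobe -r nvmet_tcp nvmet                             # unload the kernel target modules
fi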
00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@51 -- # have_pci_nics=0 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@13 -- # digests=("sha256" "sha384" "sha512") 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@16 -- # dhgroups=("ffdhe2048" "ffdhe3072" "ffdhe4096" "ffdhe6144" "ffdhe8192") 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@17 -- # subnqn=nqn.2024-02.io.spdk:cnode0 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@18 -- # hostnqn=nqn.2024-02.io.spdk:host0 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@19 -- # nvmet_subsys=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@20 -- # nvmet_host=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # keys=() 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@21 -- # ckeys=() 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@68 -- # nvmftestinit 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@448 -- # prepare_net_devs 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@410 -- # local -g is_hw=no 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@412 -- # remove_spdk_ns 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@285 -- # xtrace_disable 00:25:20.901 12:11:08 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.179 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:25:26.179 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # pci_devs=() 00:25:26.179 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@291 -- # local -a pci_devs 00:25:26.179 12:11:12 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # pci_net_devs=() 00:25:26.179 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # pci_drivers=() 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@293 -- # local -A pci_drivers 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # net_devs=() 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@295 -- # local -ga net_devs 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # e810=() 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@296 -- # local -ga e810 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # x722=() 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@297 -- # local -ga x722 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # mlx=() 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@298 -- # local -ga mlx 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:25:26.180 Found 0000:86:00.0 (0x8086 - 0x159b) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 
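gather_supported_nvmf_pci_devs above collects E810/X722/Mellanox ports by vendor:device ID from a PCI cache built elsewhere in nvmf/common.sh. An equivalent ad-hoc scan with plain lspci (hypothetical, not what the script itself runs) would be:
# Print NVMe-oF capable NICs by numeric vendor:device ID.
# 8086 = Intel (1592/159b = E810, 37d2 = X722), 15b3 = Mellanox.
for id in 8086:1592 8086:159b 8086:37d2 15b3:a2dc 15b3:1021 15b3:a2d6 15b3:101d 15b3:1017 15b3:1019 15b3:1015 15b3:1013; do
    lspci -Dnn -d "$id"   # -D: full domain:bus:dev.func address, -nn: keep numeric IDs in the output
done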
00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:25:26.180 Found 0000:86:00.1 (0x8086 - 0x159b) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:25:26.180 Found net devices under 0000:86:00.0: cvl_0_0 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@390 -- # [[ up == up ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 
00:25:26.180 Found net devices under 0000:86:00.1: cvl_0_1 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@414 -- # is_hw=yes 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:25:26.180 12:11:12 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:25:26.180 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:25:26.180 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.174 ms 00:25:26.180 00:25:26.180 --- 10.0.0.2 ping statistics --- 00:25:26.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.180 rtt min/avg/max/mdev = 0.174/0.174/0.174/0.000 ms 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:25:26.180 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:25:26.180 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.393 ms 00:25:26.180 00:25:26.180 --- 10.0.0.1 ping statistics --- 00:25:26.180 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:25:26.180 rtt min/avg/max/mdev = 0.393/0.393/0.393/0.000 ms 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@422 -- # return 0 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@69 -- # nvmfappstart -L nvme_auth 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@722 -- # xtrace_disable 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@481 -- # nvmfpid=456284 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@482 -- # waitforlisten 456284 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 456284 ']' 00:25:26.180 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:26.181 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:26.181 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
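nvmf_tcp_init above builds the point-to-point test network: the target port cvl_0_0 is moved into its own network namespace, both sides get addresses on 10.0.0.0/24, TCP port 4420 is opened, and the link is verified with a ping in each direction. A standalone sketch of the same setup, using the interface names and addresses from this run:
ns=cvl_0_0_ns_spdk
target_if=cvl_0_0        # becomes the NVMe/TCP target side (10.0.0.2)
initiator_if=cvl_0_1     # stays in the default namespace as the initiator (10.0.0.1)
ip -4 addr flush dev "$target_if"; ip -4 addr flush dev "$initiator_if"
ip netns add "$ns"
ip link set "$target_if" netns "$ns"
ip addr add 10.0.0.1/24 dev "$initiator_if"
ip netns exec "$ns" ip addr add 10.0.0.2/24 dev "$target_if"
ip link set "$initiator_if" up
ip netns exec "$ns" ip link set "$target_if" up
ip netns exec "$ns" ip link set lo up
iptables -I INPUT 1 -i "$initiator_if" -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
ping -c 1 10.0.0.2 && ip netns exec "$ns" ping -c 1 10.0.0.1           # verify both directions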
00:25:26.181 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -L nvme_auth 00:25:26.181 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:26.181 12:11:13 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@728 -- # xtrace_disable 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@70 -- # trap 'cat /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log; cleanup' SIGINT SIGTERM EXIT 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key null 32 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=f603d7c79535a807728b9c482dacab11 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.dxQ 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key f603d7c79535a807728b9c482dacab11 0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 f603d7c79535a807728b9c482dacab11 0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=f603d7c79535a807728b9c482dacab11 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.dxQ 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.dxQ 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # keys[0]=/tmp/spdk.key-null.dxQ 00:25:27.118 12:11:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # gen_dhchap_key sha512 64 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=5171ccd0a5bdffc5f68817fb1fb2f298d378232b91a3f3d825b5e233e491119b 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.wc4 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 5171ccd0a5bdffc5f68817fb1fb2f298d378232b91a3f3d825b5e233e491119b 3 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 5171ccd0a5bdffc5f68817fb1fb2f298d378232b91a3f3d825b5e233e491119b 3 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=5171ccd0a5bdffc5f68817fb1fb2f298d378232b91a3f3d825b5e233e491119b 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.wc4 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.wc4 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@73 -- # ckeys[0]=/tmp/spdk.key-sha512.wc4 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key null 48 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d722222269a229b3ca4ad723c6799cdec02c9e9f0e3649a0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.sme 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # 
format_dhchap_key d722222269a229b3ca4ad723c6799cdec02c9e9f0e3649a0 0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d722222269a229b3ca4ad723c6799cdec02c9e9f0e3649a0 0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d722222269a229b3ca4ad723c6799cdec02c9e9f0e3649a0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.sme 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.sme 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # keys[1]=/tmp/spdk.key-null.sme 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # gen_dhchap_key sha384 48 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=06c8a0f0fc2f2cb0fceb8c68a22c85e232f20aa66dce07fc 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.hfr 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 06c8a0f0fc2f2cb0fceb8c68a22c85e232f20aa66dce07fc 2 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 06c8a0f0fc2f2cb0fceb8c68a22c85e232f20aa66dce07fc 2 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=06c8a0f0fc2f2cb0fceb8c68a22c85e232f20aa66dce07fc 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.hfr 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.hfr 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@74 -- # ckeys[1]=/tmp/spdk.key-sha384.hfr 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@723 -- # local digest len file key 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:27.118 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=ba642aee44bceb0dcf8d91332a1ea89c 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.6FN 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key ba642aee44bceb0dcf8d91332a1ea89c 1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 ba642aee44bceb0dcf8d91332a1ea89c 1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=ba642aee44bceb0dcf8d91332a1ea89c 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.6FN 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.6FN 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # keys[2]=/tmp/spdk.key-sha256.6FN 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # gen_dhchap_key sha256 32 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha256 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2baa21d99bd29225074454c205dc1ea3 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha256.XXX 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha256.Rkb 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2baa21d99bd29225074454c205dc1ea3 1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2baa21d99bd29225074454c205dc1ea3 1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key 
digest 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=2baa21d99bd29225074454c205dc1ea3 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=1 00:25:27.119 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha256.Rkb 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha256.Rkb 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@75 -- # ckeys[2]=/tmp/spdk.key-sha256.Rkb 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key sha384 48 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha384 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=48 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 24 /dev/urandom 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=0e21167468a6e4798e9d179a1877f1ad4e89735aba963831 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha384.XXX 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha384.Hyy 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 0e21167468a6e4798e9d179a1877f1ad4e89735aba963831 2 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 0e21167468a6e4798e9d179a1877f1ad4e89735aba963831 2 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=0e21167468a6e4798e9d179a1877f1ad4e89735aba963831 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=2 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.378 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha384.Hyy 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha384.Hyy 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # keys[3]=/tmp/spdk.key-sha384.Hyy 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # gen_dhchap_key null 32 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.379 12:11:14 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=null 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=32 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 16 /dev/urandom 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=d4114ddd8f31fb57e5c9a6aeaf77234d 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-null.XXX 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-null.arR 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key d4114ddd8f31fb57e5c9a6aeaf77234d 0 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 d4114ddd8f31fb57e5c9a6aeaf77234d 0 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # key=d4114ddd8f31fb57e5c9a6aeaf77234d 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=0 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-null.arR 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-null.arR 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@76 -- # ckeys[3]=/tmp/spdk.key-null.arR 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # gen_dhchap_key sha512 64 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@723 -- # local digest len file key 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # digests=(['null']='0' ['sha256']='1' ['sha384']='2' ['sha512']='3') 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@724 -- # local -A digests 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # digest=sha512 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@726 -- # len=64 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # xxd -p -c0 -l 32 /dev/urandom 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@727 -- # key=2524e4e1065175464062523851cdf0abc32736eaa1d081d3fb760f60daf89ca4 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # mktemp -t spdk.key-sha512.XXX 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@728 -- # file=/tmp/spdk.key-sha512.hsW 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@729 -- # format_dhchap_key 2524e4e1065175464062523851cdf0abc32736eaa1d081d3fb760f60daf89ca4 3 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@719 -- # format_key DHHC-1 2524e4e1065175464062523851cdf0abc32736eaa1d081d3fb760f60daf89ca4 3 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@702 -- # local prefix key digest 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # prefix=DHHC-1 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # 
key=2524e4e1065175464062523851cdf0abc32736eaa1d081d3fb760f60daf89ca4 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@704 -- # digest=3 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@705 -- # python - 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@730 -- # chmod 0600 /tmp/spdk.key-sha512.hsW 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@732 -- # echo /tmp/spdk.key-sha512.hsW 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # keys[4]=/tmp/spdk.key-sha512.hsW 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@77 -- # ckeys[4]= 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@79 -- # waitforlisten 456284 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@829 -- # '[' -z 456284 ']' 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@834 -- # local max_retries=100 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:27.379 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@838 -- # xtrace_disable 00:25:27.379 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@862 -- # return 0 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key0 /tmp/spdk.key-null.dxQ 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha512.wc4 ]] 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey0 /tmp/spdk.key-sha512.wc4 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.638 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key1 /tmp/spdk.key-null.sme 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 
12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha384.hfr ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey1 /tmp/spdk.key-sha384.hfr 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key2 /tmp/spdk.key-sha256.6FN 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-sha256.Rkb ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey2 /tmp/spdk.key-sha256.Rkb 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key3 /tmp/spdk.key-sha384.Hyy 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n /tmp/spdk.key-null.arR ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # rpc_cmd keyring_file_add_key ckey3 /tmp/spdk.key-null.arR 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@80 -- # for i in "${!keys[@]}" 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@81 -- # rpc_cmd keyring_file_add_key key4 /tmp/spdk.key-sha512.hsW 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@82 -- # [[ -n '' ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@85 -- # nvmet_auth_init 
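The keys[0..4]/ckeys[0..3] registered above are produced by gen_dhchap_key: random hex from /dev/urandom is wrapped into a DH-HMAC-CHAP secret string, written to a 0600 file under /tmp, and loaded into the target's keyring over RPC. A condensed sketch of one such key, assuming the secret layout base64(secret + CRC-32(secret), little-endian) used by the helper and SPDK's scripts/rpc.py talking to the default /var/tmp/spdk.sock socket:
key_hex=$(xxd -p -c0 -l 24 /dev/urandom)         # 48 hex characters of key material, as for the 48-char keys above
secret=$(python3 -c 'import base64, binascii, struct, sys
key = sys.argv[1].encode()                        # the hex string itself is the secret, matching the DHHC-1 keys in this log
crc = struct.pack("<I", binascii.crc32(key))      # CRC-32 of the secret, little-endian (assumed)
print("DHHC-1:00:" + base64.b64encode(key + crc).decode() + ":")' "$key_hex")   # digest field 00 = no hash, like the null keys here
keyfile=$(mktemp -t spdk.key-null.XXX)
echo "$secret" > "$keyfile"
chmod 0600 "$keyfile"
scripts/rpc.py keyring_file_add_key key1 "$keyfile"   # same RPC the rpc_cmd wrapper issues above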
00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # get_main_ns_ip 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@35 -- # configure_kernel_target nqn.2024-02.io.spdk:cnode0 10.0.0.1 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@632 -- # local kernel_name=nqn.2024-02.io.spdk:cnode0 kernel_target_ip=10.0.0.1 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@639 -- # local block nvme 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@642 -- # modprobe nvmet 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:25:27.639 12:11:14 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:25:30.173 Waiting for block devices as requested 00:25:30.173 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:25:30.173 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:30.173 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:30.432 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:30.432 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:30.432 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:30.432 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:30.691 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:30.691 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:30.691 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:25:30.691 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:25:30.950 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:25:30.950 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:25:30.950 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:25:31.270 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:25:31.270 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:25:31.270 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:25:31.838 No valid GPT data, bailing 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@391 -- # pt= 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- scripts/common.sh@392 -- # return 1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:25:31.838 12:11:18 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@665 -- # echo SPDK-nqn.2024-02.io.spdk:cnode0 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@667 -- # echo 1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@669 -- # echo 1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@672 -- # echo tcp 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@673 -- # echo 4420 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@674 -- # echo ipv4 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 /sys/kernel/config/nvmet/ports/1/subsystems/ 00:25:31.838 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:25:31.838 00:25:31.838 Discovery Log Number of Records 2, Generation counter 2 00:25:31.838 =====Discovery Log Entry 0====== 00:25:31.838 trtype: tcp 00:25:31.838 adrfam: ipv4 00:25:31.838 subtype: current discovery subsystem 00:25:31.838 treq: not specified, sq flow control disable supported 00:25:31.838 portid: 1 00:25:31.838 trsvcid: 4420 00:25:31.838 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:25:31.838 traddr: 10.0.0.1 00:25:31.838 eflags: none 00:25:31.838 sectype: none 00:25:31.838 =====Discovery Log Entry 1====== 00:25:31.838 trtype: tcp 00:25:31.838 adrfam: ipv4 00:25:31.838 subtype: nvme subsystem 00:25:31.838 treq: not specified, sq flow control disable supported 00:25:31.838 portid: 1 00:25:31.839 trsvcid: 4420 00:25:31.839 subnqn: nqn.2024-02.io.spdk:cnode0 00:25:31.839 traddr: 10.0.0.1 00:25:31.839 eflags: none 00:25:31.839 sectype: none 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@36 -- # mkdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@37 -- # echo 0 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@38 -- # ln -s /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@88 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@49 -- # echo ffdhe2048 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s sha256,sha384,sha512 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # IFS=, 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@94 -- # printf %s ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@93 -- # connect_authenticate sha256,sha384,sha512 ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 1 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256,sha384,sha512 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256,sha384,sha512 --dhchap-dhgroups ffdhe2048,ffdhe3072,ffdhe4096,ffdhe6144,ffdhe8192 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.839 12:11:18 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 
10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:31.839 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 nvme0n1 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 0 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 0 
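For readability, the target-side provisioning traced above can be summarized as follows. Bash xtrace does not display redirection targets, so the configfs attribute paths in this sketch (attr_allow_any_host, allowed_hosts, dhchap_hash, dhchap_dhgroup, dhchap_key, dhchap_ctrl_key) are the standard Linux nvmet names assumed to sit behind the bare echo commands, not values taken from the log; the NQNs and the DHHC-1 secrets are the ones shown in the trace, elided here for brevity.

# Hedged sketch of the kernel nvmet (target-side) DH-HMAC-CHAP setup traced above.
# Redirect targets are assumptions: xtrace hides them; these are the usual nvmet configfs attributes.
CFS=/sys/kernel/config/nvmet
SUBNQN=nqn.2024-02.io.spdk:cnode0
HOSTNQN=nqn.2024-02.io.spdk:host0

# Restrict the subsystem to one authenticated host (host/auth.sh@36-38 above).
mkdir "$CFS/hosts/$HOSTNQN"
echo 0 > "$CFS/subsystems/$SUBNQN/attr_allow_any_host"      # assumed target of the bare 'echo 0'
ln -s "$CFS/hosts/$HOSTNQN" "$CFS/subsystems/$SUBNQN/allowed_hosts/$HOSTNQN"

# nvmet_auth_set_key <digest> <dhgroup> <keyid>, e.g. sha256 ffdhe2048 1 (host/auth.sh@42-51 above).
echo 'hmac(sha256)'  > "$CFS/hosts/$HOSTNQN/dhchap_hash"
echo 'ffdhe2048'     > "$CFS/hosts/$HOSTNQN/dhchap_dhgroup"
echo 'DHHC-1:00:...' > "$CFS/hosts/$HOSTNQN/dhchap_key"      # the key<keyid> secret shown in the trace
echo 'DHHC-1:02:...' > "$CFS/hosts/$HOSTNQN/dhchap_ctrl_key" # the ckey<keyid> secret, only when one is defined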
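From host/auth.sh@100 onward the trace repeats one cycle per (digest, dhgroup, keyid) combination: rewrite the target-side key, have the SPDK initiator authenticate over TCP, verify the controller came up, then detach. Condensed into the bash it corresponds to (reconstructed from the xtrace above; the keys/ckeys arrays and the key0..key4 / ckey0..ckey4 keyring names are prepared earlier in the test and are not part of this excerpt):

# Condensed reconstruction of the loop traced below (host/auth.sh@100-104 plus connect_authenticate).
digests=(sha256 sha384 sha512)
dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)

for digest in "${digests[@]}"; do
  for dhgroup in "${dhgroups[@]}"; do
    for keyid in "${!keys[@]}"; do
      nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"         # target side, as in the sketch above
      # connect_authenticate "$digest" "$dhgroup" "$keyid":
      rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
      rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
          -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
          --dhchap-key "key$keyid" ${ckeys[keyid]:+--dhchap-ctrlr-key "ckey$keyid"}
      [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == nvme0 ]]  # handshake succeeded
      rpc_cmd bdev_nvme_detach_controller nvme0
    done
  done
done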
00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.098 nvme0n1 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.098 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.357 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.358 12:11:19 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.358 nvme0n1 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 2 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # 
echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 2 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.358 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.618 nvme0n1 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 3 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 3 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.618 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.877 nvme0n1 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe2048 4 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe2048 4 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.877 12:11:19 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:32.877 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 nvme0n1 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.137 12:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 nvme0n1 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.137 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local 
digest dhgroup keyid key ckey 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.397 
12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.397 nvme0n1 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.397 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 2 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 2 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.657 12:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.657 nvme0n1 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 3 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:33.657 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 3 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.658 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.918 12:11:20 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.918 12:11:20 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.918 nvme0n1 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe3072 4 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:33.918 12:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe3072 4 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe3072 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:33.918 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 nvme0n1 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 0 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 0 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # 
ip_candidates=() 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.178 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.179 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.438 nvme0n1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:34.438 12:11:21 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.438 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.698 nvme0n1 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 
== 0 ]] 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 2 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:34.698 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 2 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 
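[editorial note] The nvmet_auth_set_key calls traced here configure the target (kernel nvmet) side of DH-HMAC-CHAP for each key index: the echo lines at host/auth.sh@48-51 push the digest ('hmac(sha256)'), the FFDHE group, and the DHHC-1 host/controller secrets. A minimal stand-alone sketch of that target-side setup follows; the configfs paths and host NQN are assumptions, since the real helper lives in host/auth.sh and is not part of this excerpt.

# Target-side DH-HMAC-CHAP key setup (sketch; assumed nvmet configfs layout, run as root).
# KEY/CKEY stand for the DHHC-1:... strings echoed in the trace for the current keyid.
HOSTNQN=nqn.2024-02.io.spdk:host0                    # host NQN used by this test
HOST_DIR=/sys/kernel/config/nvmet/hosts/$HOSTNQN     # assumed configfs path for the allowed host
mkdir -p "$HOST_DIR"
echo 'hmac(sha256)' > "$HOST_DIR/dhchap_hash"        # digest, matches the echo 'hmac(sha256)' above
echo 'ffdhe4096'    > "$HOST_DIR/dhchap_dhgroup"     # DH group for this iteration
echo "$KEY"         > "$HOST_DIR/dhchap_key"         # host secret for the current keyid
[[ -n $CKEY ]] && echo "$CKEY" > "$HOST_DIR/dhchap_ctrl_key"   # controller secret, bidirectional auth only

Note that keyid 4 has no controller key (ckey is empty in the trace), which is why the script guards the controller-key write with the [[ -z ... ]] test visible at host/auth.sh@51.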
00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.957 12:11:21 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.957 nvme0n1 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:34.957 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 3 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # 
dhgroup=ffdhe4096 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 3 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.216 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n 
nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.217 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.476 nvme0n1 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe4096 4 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe4096 4 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.476 12:11:22 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe4096 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.476 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.735 nvme0n1 00:25:35.735 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.735 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha256 ffdhe6144 0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # 
ip=NVMF_INITIATOR_IP 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.736 12:11:22 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.995 nvme0n1 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:35.995 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 1 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 
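[editorial note] On the host (initiator) side, connect_authenticate drives each iteration entirely through SPDK JSON-RPCs, as the bdev_nvme_* calls in this trace show: it restricts the negotiable digests and DH groups with bdev_nvme_set_options, attaches the controller with the per-keyid DH-HMAC-CHAP keys, checks that bdev_nvme_get_controllers reports nvme0 (i.e. authentication actually completed), and detaches again. Below is a condensed sketch of one such iteration (sha256 / ffdhe6144 / keyid 1); rpc.py stands in for the test's rpc_cmd wrapper, and key1/ckey1 are key names registered earlier in the test, outside this excerpt.

# One connect_authenticate iteration, host side (sketch).
# Assumes an SPDK application is serving the RPC socket and that DH-HMAC-CHAP
# secrets named key1/ckey1 were registered with it beforehand.
RPC=./scripts/rpc.py      # the trace's rpc_cmd is assumed to wrap this script

$RPC bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144

$RPC bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
    --dhchap-key key1 --dhchap-ctrlr-key ckey1

# The attach only yields a controller if DH-HMAC-CHAP succeeded.
name=$($RPC bdev_nvme_get_controllers | jq -r '.[].name')
[[ $name == nvme0 ]] || exit 1

$RPC bdev_nvme_detach_controller nvme0

Keyid 4 is the unidirectional case: the trace attaches it with --dhchap-key key4 only, without --dhchap-ctrlr-key, so the host proves its identity to the controller but not vice versa.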
00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 1 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.254 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.255 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:36.255 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.255 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.514 nvme0n1 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:36.514 12:11:23 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 2 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 2 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:36.514 12:11:23 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 nvme0n1 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 3 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 3 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.082 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 nvme0n1 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.341 12:11:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe6144 4 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe6144 4 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe6144 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.341 12:11:24 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.341 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 nvme0n1 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:37.909 12:11:24 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:38.477 nvme0n1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:38.477 12:11:25 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.046 nvme0n1 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 2 00:25:39.046 
12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 2 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.046 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.615 nvme0n1 00:25:39.615 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.615 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:39.615 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:39.615 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 3 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 3 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:39.616 
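
The repeating unit in this trace is connect_authenticate <digest> <dhgroup> <keyid>: the host first pins bdev_nvme to a single DH-HMAC-CHAP digest and DH group, attaches a controller to 10.0.0.1:4420 with the key (and, when one exists, the controller key) for that keyid, checks that the controller actually came up, and detaches again before the next combination. A minimal sketch of one such round, assuming rpc_cmd resolves to SPDK's scripts/rpc.py as elsewhere in the autotest harness and that key2/ckey2 were registered with the keyring earlier in the script (that setup is outside this excerpt):

  # One authentication round, mirroring the connect_authenticate steps above.
  digest=sha256 dhgroup=ffdhe8192 keyid=2
  rpc_cmd bdev_nvme_set_options --dhchap-digests "$digest" --dhchap-dhgroups "$dhgroup"
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key$keyid" --dhchap-ctrlr-key "ckey$keyid"
  # Accept only if exactly one controller named nvme0 is reported.
  [[ $(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name') == "nvme0" ]]
  rpc_cmd bdev_nvme_detach_controller nvme0

Restricting the allowed digest and DH group per iteration is what turns this into an exhaustive matrix test: each successful attach demonstrates that one exact hmac(shaX)/ffdheY/key combination negotiated end to end.
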
12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:39.616 12:11:26 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.184 nvme0n1 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha256 ffdhe8192 4 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha256 ffdhe8192 4 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha256 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe8192 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.184 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.751 nvme0n1 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:40.751 12:11:27 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 nvme0n1 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 1 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.011 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 1 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@61 -- # get_main_ns_ip 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.012 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.271 nvme0n1 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 2 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:41.271 12:11:28 
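
The get_main_ns_ip helper traced just above picks the address to dial from the transport in use: an associative array maps "rdma" to NVMF_FIRST_TARGET_IP and "tcp" to NVMF_INITIATOR_IP, and the chosen variable name is then dereferenced to the actual address, 10.0.0.1 on this run. A rough reconstruction follows; the TEST_TRANSPORT variable name and the indirect expansion are assumptions about how the echoed 10.0.0.1 is produced, since the trace only shows the substituted values:

  get_main_ns_ip() {
      local ip
      local -A ip_candidates=()
      ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP
      ip_candidates["tcp"]=NVMF_INITIATOR_IP
      # No transport, or no candidate variable for it: nothing to return.
      [[ -z $TEST_TRANSPORT || -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1   # the variable named by $ip must hold an address
      echo "${!ip}"                 # 10.0.0.1 here, with TEST_TRANSPORT=tcp
  }

With TEST_TRANSPORT=tcp and NVMF_INITIATOR_IP=10.0.0.1 exported by the surrounding suite, the helper prints 10.0.0.1, which is exactly the address fed to bdev_nvme_attach_controller throughout this log.
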
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 2 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.271 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 nvme0n1 00:25:41.531 12:11:28 
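
Two details of the key handling are worth noting. The secrets follow the DH-HMAC-CHAP secret representation "DHHC-1:NN:<base64>:", where the middle field records the HMAC the secret was generated with (00 for an unhashed secret, 01/02/03 for SHA-256/384/512-sized ones, as produced for example by nvme-cli's gen-dhchap-key). And ckeys[4] is empty in this matrix, so keyid 4 exercises host-to-target authentication only: the trace's ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) expands to the extra --dhchap-ctrlr-key argument only when a controller key exists, which is why the key4 attach earlier in the trace carries no --dhchap-ctrlr-key. A small standalone illustration of that expansion (the array contents are made up for the example):

  # ${var:+word} expands to word only when var is set and non-empty.
  ckeys=("c0secret" "")      # index 0 has a controller key, index 1 does not
  for keyid in "${!ckeys[@]}"; do
      ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
      echo "keyid=$keyid extra args: ${ckey[*]:-<none>}"
  done
  # keyid=0 extra args: --dhchap-ctrlr-key ckey0
  # keyid=1 extra args: <none>
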
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 3 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 3 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups 
ffdhe2048 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 nvme0n1 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.531 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe2048 4 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@44 -- # digest=sha384 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe2048 4 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe2048 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.791 nvme0n1 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 0 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:41.791 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 0 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # 
dhgroup=ffdhe3072 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.792 12:11:28 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:41.792 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.051 nvme0n1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.051 
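
Stepping back, the host/auth.sh@100-@103 markers show the shape of the matrix being walked: an outer loop over digests (sha256 finished earlier in the trace, sha384 is in progress), a middle loop over DH groups (ffdhe8192 under sha256, then ffdhe2048 and now ffdhe3072 under sha384), and an inner loop over key indices 0-4. Each iteration first installs the key material on the target side via nvmet_auth_set_key (the echoes of 'hmac(sha384)', the group name, and the secrets; their destination is outside this excerpt) and then runs connect_authenticate from the host. A schematic of that driver, where the exact array contents beyond the values visible here are assumptions:

  # Loop structure implied by host/auth.sh@100-@103; keys/ckeys hold the
  # DHHC-1 secrets prepared earlier in the script (not shown in this excerpt).
  digests=(sha256 sha384 sha512)
  dhgroups=(ffdhe2048 ffdhe3072 ffdhe4096 ffdhe6144 ffdhe8192)
  for digest in "${digests[@]}"; do
      for dhgroup in "${dhgroups[@]}"; do
          for keyid in "${!keys[@]}"; do
              nvmet_auth_set_key "$digest" "$dhgroup" "$keyid"    # target-side key/params
              connect_authenticate "$digest" "$dhgroup" "$keyid"  # host-side attach/verify
          done
      done
  done
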
12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.051 12:11:29 
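
The acceptance check repeated throughout this log is the pair of lines around host/auth.sh@64: bdev_nvme_get_controllers is parsed with jq -r '.[].name' and compared against nvme0 as [[ nvme0 == \n\v\m\e\0 ]]; the backslashes are how xtrace reprints a quoted right-hand side, i.e. a literal string comparison rather than a glob match. A compact form of that check and the follow-up detach, written as a helper for readability (the function name is illustrative, not from the suite, and rpc_cmd is again assumed to wrap scripts/rpc.py):

  # Succeeds when exactly one attached controller is reported and it is nvme0.
  verify_controller() {
      local name
      name=$(rpc_cmd bdev_nvme_get_controllers | jq -r '.[].name')
      [[ $name == "nvme0" ]]
  }

  verify_controller && rpc_cmd bdev_nvme_detach_controller nvme0
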
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.051 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.310 nvme0n1 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 2 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo 
DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 2 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.311 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.570 nvme0n1 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 
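Before every attach the trace also runs get_main_ns_ip (nvmf/common.sh@741-755) to decide which address to dial: an associative array maps the transport to the environment variable holding the address (NVMF_FIRST_TARGET_IP for rdma, NVMF_INITIATOR_IP for tcp) and that variable's value, 10.0.0.1 throughout this run, is echoed back. The sketch below is consistent with those trace lines; the name of the transport variable and the error handling are not visible in this excerpt and are assumptions.

  # Resolve the initiator-facing address used by bdev_nvme_attach_controller
  # (reconstructed from nvmf/common.sh@741-755 in the trace).
  get_main_ns_ip() {
      local ip
      local -A ip_candidates=(
          ["rdma"]=NVMF_FIRST_TARGET_IP
          ["tcp"]=NVMF_INITIATOR_IP
      )
      [[ -z $TEST_TRANSPORT ]] && return 1                  # transport name assumed to live in TEST_TRANSPORT
      [[ -z ${ip_candidates[$TEST_TRANSPORT]} ]] && return 1
      ip=${ip_candidates[$TEST_TRANSPORT]}
      [[ -z ${!ip} ]] && return 1                           # indirect lookup, e.g. $NVMF_INITIATOR_IP
      echo "${!ip}"
  }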
== \n\v\m\e\0 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 3 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 3 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local 
-A ip_candidates 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.570 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:42.571 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.571 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.830 nvme0n1 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe3072 4 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:42.830 
12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe3072 4 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe3072 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:42.830 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:42.831 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:42.831 12:11:29 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.090 nvme0n1 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.090 
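The keyid 4 pass above also shows the target-side helper end to end: nvmet_auth_set_key (host/auth.sh@42-51) picks the key pair for the given index and echoes 'hmac(sha384)', the DH group name, the DHHC-1 secret and, only when one is configured, the controller secret; for keyid 4 the ckey is empty, so the [[ -z '' ]] check at @51 skips the last write. xtrace does not print redirections, so the destinations of those echoes are not visible here; the sketch below assumes the usual kernel nvmet configfs attributes for the host entry and is not the script verbatim.

  # Target-side key programming, reconstructed from host/auth.sh@42-51.
  # The configfs paths are an assumption; the trace only shows the echoed values.
  nvmet_auth_set_key() {
      local digest=$1 dhgroup=$2 keyid=$3
      local key=${keys[keyid]} ckey=${ckeys[keyid]}
      local host_cfg=/sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0   # assumed destination
      echo "hmac(${digest})" > "${host_cfg}/dhchap_hash"
      echo "${dhgroup}" > "${host_cfg}/dhchap_dhgroup"
      echo "${key}" > "${host_cfg}/dhchap_key"
      [[ -z ${ckey} ]] || echo "${ckey}" > "${host_cfg}/dhchap_ctrl_key"
  }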
12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 
]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.090 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 nvme0n1 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 1 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 1 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.349 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:43.350 12:11:30 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.350 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.609 nvme0n1 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 2 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 2 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.609 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.868 nvme0n1 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.868 12:11:30 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha384 ffdhe4096 3 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:43.868 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 3 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:43.869 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.127 nvme0n1 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe4096 4 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe4096 4 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.127 12:11:31 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe4096 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.127 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.386 nvme0n1 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
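The matching host-side behaviour for a key with no controller secret is visible just above: at host/auth.sh@58 the ckey array stays empty because ${ckeys[4]} is unset, so the attach at @61 is issued with --dhchap-key key4 only and the target is not asked to authenticate itself (unidirectional DH-HMAC-CHAP). A sketch of that conditional, with the NQNs taken from the trace:

  # Optional bidirectional authentication: pass --dhchap-ctrlr-key only when a
  # controller secret exists for this key index (host/auth.sh@58 and @61).
  ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"})
  rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
      -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 \
      --dhchap-key "key${keyid}" "${ckey[@]}"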
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.386 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.954 nvme0n1 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.954 12:11:31 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:44.954 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.213 nvme0n1 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.213 12:11:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 2 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:45.213 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 2 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.214 12:11:32 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.214 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.782 nvme0n1 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 3 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 3 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:45.782 12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:45.782 
12:11:32 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.040 nvme0n1 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.040 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe6144 4 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe6144 4 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe6144 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.041 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.300 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.559 nvme0n1 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:46.559 12:11:33 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:46.559 12:11:33 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.159 nvme0n1 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 1 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 1 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.159 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.160 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.728 nvme0n1 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 2 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 2 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.728 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:47.988 
12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:47.988 12:11:34 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.570 nvme0n1 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:48.570 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 3 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z 
DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 3 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:48.571 12:11:35 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 nvme0n1 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.140 12:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha384 ffdhe8192 4 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha384 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha384)' 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha384 ffdhe8192 4 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha384 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha384 --dhchap-dhgroups ffdhe8192 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.140 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.140 12:11:36 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.141 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.709 nvme0n1 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@100 -- # for digest in "${digests[@]}" 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 0 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.709 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # 
ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 0 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:49.710 nvme0n1 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.710 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 1 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:49.969 12:11:36 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 1 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key 
"ckey${keyid}"}) 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.969 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.970 nvme0n1 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 2 00:25:49.970 
12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 2 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:49.970 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.229 nvme0n1 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 3 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 3 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.229 
12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.229 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.488 nvme0n1 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@10 -- # set +x 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe2048 4 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.488 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe2048 4 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe2048 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe2048 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host 
-- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.489 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 nvme0n1 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 nvme0n1 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:50.748 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.008 12:11:37 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.008 
12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 1 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 1 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.008 12:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.008 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.009 nvme0n1 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.009 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 2 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:51.269 12:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 2 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 nvme0n1 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 3 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 3 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.269 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.528 nvme0n1 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe3072 4 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe3072 00:25:51.528 
12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe3072 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe3072 4 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe3072 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe3072 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.528 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.529 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 
00:25:51.788 nvme0n1 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 0 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 0 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:51.788 12:11:38 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:51.788 12:11:38 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.047 nvme0n1 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.047 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.048 12:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.048 12:11:39 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.048 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.307 nvme0n1 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 2 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 2 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.307 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.308 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.567 nvme0n1 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd 
bdev_nvme_detach_controller nvme0 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 3 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 3 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # 
ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:52.567 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:52.826 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:52.826 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.826 12:11:39 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.826 nvme0n1 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:52.826 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe4096 4 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe4096 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe4096 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe4096 4 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe4096 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:53.084 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe4096 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.085 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.343 nvme0n1 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 0 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.343 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 0 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.344 12:11:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.344 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.602 nvme0n1 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 1 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:53.602 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 1 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.603 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:53.862 12:11:40 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:53.862 12:11:40 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.122 nvme0n1 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 2 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 2 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # 
ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.122 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.691 nvme0n1 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # 
nvmet_auth_set_key sha512 ffdhe6144 3 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.691 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 3 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.692 12:11:41 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.951 nvme0n1 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe6144 4 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe6144 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe6144 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe6144 4 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:54.951 12:11:42 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe6144 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe6144 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:54.951 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:54.952 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:54.952 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:54.952 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:54.952 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:54.952 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.519 nvme0n1 00:25:55.519 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.519 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:55.519 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:55.519 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.519 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@101 -- # for dhgroup in "${dhgroups[@]}" 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZjYwM2Q3Yzc5NTM1YTgwNzcyOGI5YzQ4MmRhY2FiMTEhSD+J: 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:03:NTE3MWNjZDBhNWJkZmZjNWY2ODgxN2ZiMWZiMmYyOThkMzc4MjMyYjkxYTNmM2Q4MjViNWUyMzNlNDkxMTE5YiHM3ik=: 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key0 --dhchap-ctrlr-key ckey0 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:55.520 12:11:42 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.089 nvme0n1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo 
DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey1 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.090 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 nvme0n1 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:56.656 12:11:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 2 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:56.656 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=2 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:01:YmE2NDJhZWU0NGJjZWIwZGNmOGQ5MTMzMmExZWE4OWOkvknY: 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: ]] 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:01:MmJhYTIxZDk5YmQyOTIyNTA3NDQ1NGMyMDVkYzFlYTO+7zNb: 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 2 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=2 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:56.657 12:11:43 
nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 --dhchap-ctrlr-key ckey2 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:56.657 12:11:43 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.225 nvme0n1 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 3 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=3 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # 
key=DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:02:MGUyMTE2NzQ2OGE2ZTQ3OThlOWQxNzlhMTg3N2YxYWQ0ZTg5NzM1YWJhOTYzODMxVJCVXw==: 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:00:ZDQxMTRkZGQ4ZjMxZmI1N2U1YzlhNmFlYWY3NzIzNGTGPbmS: 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 3 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=3 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key3 --dhchap-ctrlr-key ckey3 00:25:57.225 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.225 
12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.793 nvme0n1 00:25:57.793 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.793 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:57.793 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:57.793 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.793 12:11:44 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:57.793 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:57.794 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:57.794 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:57.794 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:57.794 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@102 -- # for keyid in "${!keys[@]}" 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@103 -- # nvmet_auth_set_key sha512 ffdhe8192 4 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha512 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe8192 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # keyid=4 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey= 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha512)' 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe8192 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:03:MjUyNGU0ZTEwNjUxNzU0NjQwNjI1MjM4NTFjZGYwYWJjMzI3MzZlYWExZDA4MWQzZmI3NjBmNjBkYWY4OWNhNPg1fPg=: 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z '' ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@104 -- # connect_authenticate sha512 ffdhe8192 4 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@55 -- # local digest dhgroup keyid ckey 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # digest=sha512 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # dhgroup=ffdhe8192 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@57 -- # keyid=4 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@58 -- # ckey=(${ckeys[keyid]:+--dhchap-ctrlr-key "ckey${keyid}"}) 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@60 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha512 --dhchap-dhgroups ffdhe8192 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # get_main_ns_ip 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@61 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key4 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.053 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.621 nvme0n1 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # jq -r '.[].name' 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@64 -- # [[ nvme0 == \n\v\m\e\0 ]] 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@65 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@110 -- # nvmet_auth_set_key sha256 ffdhe2048 1 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@42 -- # local digest dhgroup keyid key ckey 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # digest=sha256 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 -- # dhgroup=ffdhe2048 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@44 
-- # keyid=1 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@45 -- # key=DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@46 -- # ckey=DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@48 -- # echo 'hmac(sha256)' 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@49 -- # echo ffdhe2048 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@50 -- # echo DHHC-1:00:ZDcyMjIyMjI2OWEyMjliM2NhNGFkNzIzYzY3OTljZGVjMDJjOWU5ZjBlMzY0OWEw92RSJw==: 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # [[ -z DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: ]] 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@51 -- # echo DHHC-1:02:MDZjOGEwZjBmYzJmMmNiMGZjZWI4YzY4YTIyYzg1ZTIzMmYyMGFhNjZkY2UwN2Zje3M9WQ==: 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@111 -- # rpc_cmd bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048 00:25:58.621 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # get_main_ns_ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@112 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 request: 00:25:58.622 { 00:25:58.622 "name": "nvme0", 00:25:58.622 "trtype": "tcp", 00:25:58.622 "traddr": "10.0.0.1", 00:25:58.622 "adrfam": "ipv4", 00:25:58.622 "trsvcid": "4420", 00:25:58.622 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.622 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.622 "prchk_reftag": false, 00:25:58.622 "prchk_guard": false, 00:25:58.622 "hdgst": false, 00:25:58.622 "ddgst": false, 00:25:58.622 "method": "bdev_nvme_attach_controller", 00:25:58.622 "req_id": 1 00:25:58.622 } 00:25:58.622 Got JSON-RPC error response 00:25:58.622 response: 00:25:58.622 { 00:25:58.622 "code": -5, 00:25:58.622 "message": "Input/output error" 00:25:58.622 } 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # jq length 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@114 -- # (( 0 == 0 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # get_main_ns_ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.622 12:11:45 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@117 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 request: 00:25:58.622 { 00:25:58.622 "name": "nvme0", 00:25:58.622 "trtype": "tcp", 00:25:58.622 "traddr": "10.0.0.1", 00:25:58.622 "adrfam": "ipv4", 00:25:58.622 "trsvcid": "4420", 00:25:58.622 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.622 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.622 "prchk_reftag": false, 00:25:58.622 "prchk_guard": false, 00:25:58.622 "hdgst": false, 00:25:58.622 "ddgst": false, 00:25:58.622 "dhchap_key": "key2", 00:25:58.622 "method": "bdev_nvme_attach_controller", 00:25:58.622 "req_id": 1 00:25:58.622 } 00:25:58.622 Got JSON-RPC error response 00:25:58.622 response: 00:25:58.622 { 00:25:58.622 "code": -5, 00:25:58.622 "message": "Input/output error" 00:25:58.622 } 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # rpc_cmd bdev_nvme_get_controllers 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # jq length 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@120 -- # (( 0 == 0 )) 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@123 -- # get_main_ns_ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@741 -- # local ip 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # ip_candidates=() 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@742 -- # local -A ip_candidates 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:25:58.622 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@123 -- # NOT rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@648 -- # local es=0 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key1 --dhchap-ctrlr-key ckey2 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@559 -- # xtrace_disable 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:25:58.882 request: 00:25:58.882 { 00:25:58.882 "name": "nvme0", 00:25:58.882 "trtype": "tcp", 00:25:58.882 "traddr": "10.0.0.1", 00:25:58.882 "adrfam": "ipv4", 00:25:58.882 "trsvcid": "4420", 00:25:58.882 "subnqn": "nqn.2024-02.io.spdk:cnode0", 00:25:58.882 "hostnqn": "nqn.2024-02.io.spdk:host0", 00:25:58.882 "prchk_reftag": false, 00:25:58.882 "prchk_guard": false, 00:25:58.882 "hdgst": false, 00:25:58.882 "ddgst": false, 00:25:58.882 "dhchap_key": "key1", 00:25:58.882 "dhchap_ctrlr_key": "ckey2", 00:25:58.882 "method": "bdev_nvme_attach_controller", 00:25:58.882 "req_id": 1 00:25:58.882 } 00:25:58.882 Got JSON-RPC error response 00:25:58.882 response: 00:25:58.882 { 00:25:58.882 "code": -5, 00:25:58.882 "message": "Input/output error" 00:25:58.882 } 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@651 -- # es=1 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@127 -- # trap - SIGINT SIGTERM EXIT 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@128 -- # cleanup 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@24 -- # nvmftestfini 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@488 -- # nvmfcleanup 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@117 -- # sync 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@120 -- # set +e 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@121 -- # for i in {1..20} 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:25:58.882 rmmod nvme_tcp 00:25:58.882 rmmod nvme_fabrics 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@124 -- # set -e 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@125 -- # return 0 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@489 -- # '[' -n 456284 ']' 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@490 -- # killprocess 456284 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@948 -- # '[' -z 456284 ']' 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@952 -- # kill -0 456284 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # uname 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:25:58.882 12:11:45 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 456284 00:25:58.882 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:25:58.882 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:25:58.882 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@966 -- # echo 'killing process with pid 456284' 00:25:58.882 killing process with pid 456284 00:25:58.882 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@967 -- # kill 456284 00:25:58.882 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@972 -- # wait 456284 00:25:59.142 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:25:59.142 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:25:59.142 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:25:59.142 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:25:59.142 12:11:46 
nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@278 -- # remove_spdk_ns 00:25:59.143 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:25:59.143 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:25:59.143 12:11:46 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@25 -- # rm /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/allowed_hosts/nqn.2024-02.io.spdk:host0 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@26 -- # rmdir /sys/kernel/config/nvmet/hosts/nqn.2024-02.io.spdk:host0 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@27 -- # clean_kernel_target 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 ]] 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@686 -- # echo 0 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0/namespaces/1 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2024-02.io.spdk:cnode0 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:26:01.047 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:26:01.306 12:11:48 nvmf_tcp.nvmf_host.nvmf_auth_host -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:03.848 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:26:03.848 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:26:04.848 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:26:04.848 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- host/auth.sh@28 -- # rm -f /tmp/spdk.key-null.dxQ /tmp/spdk.key-null.sme /tmp/spdk.key-sha256.6FN /tmp/spdk.key-sha384.Hyy /tmp/spdk.key-sha512.hsW /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/nvme-auth.log 00:26:04.848 12:11:51 nvmf_tcp.nvmf_host.nvmf_auth_host -- 
host/auth.sh@31 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:26:07.396 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:26:07.396 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:26:07.396 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:26:07.396 00:26:07.396 real 0m46.565s 00:26:07.396 user 0m40.962s 00:26:07.396 sys 0m11.143s 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host.nvmf_auth_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.396 ************************************ 00:26:07.396 END TEST nvmf_auth_host 00:26:07.396 ************************************ 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@32 -- # [[ tcp == \t\c\p ]] 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@33 -- # run_test nvmf_digest /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:07.396 ************************************ 00:26:07.396 START TEST nvmf_digest 00:26:07.396 ************************************ 00:26:07.396 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/digest.sh --transport=tcp 00:26:07.656 * Looking for test storage... 
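The nvmf_auth_host block that finishes above spends most of its time on DH-HMAC-CHAP failure paths: after the kernel target's key material is rotated, host-side attach attempts with stale or mismatched keys are expected to come back with the JSON-RPC -5 (Input/output error) responses seen in the log. A minimal bash sketch of that negative check, limited to RPC calls and arguments that appear verbatim in this run (the rpc.py path is the one from this workspace, and the target-side key setup is assumed to have already happened):

rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# Allow only the digest/dhgroup pair the target side was (re)configured with.
$rpc bdev_nvme_set_options --dhchap-digests sha256 --dhchap-dhgroups ffdhe2048

# An attach that offers a key the target no longer accepts must fail with -5.
if $rpc bdev_nvme_attach_controller -b nvme0 -t tcp -f ipv4 -a 10.0.0.1 -s 4420 \
    -q nqn.2024-02.io.spdk:host0 -n nqn.2024-02.io.spdk:cnode0 --dhchap-key key2; then
    echo "unexpected: attach with a mismatched DH-HMAC-CHAP key succeeded" >&2
    exit 1
fi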
00:26:07.656 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@12 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # uname -s 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@5 -- # export PATH 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@47 -- # : 0 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@14 -- # nqn=nqn.2016-06.io.spdk:cnode1 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@15 -- # bperfsock=/var/tmp/bperf.sock 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@16 -- # runtime=2 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@136 -- # [[ tcp != \t\c\p ]] 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@138 -- # nvmftestinit 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:07.656 
12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@285 -- # xtrace_disable 00:26:07.656 12:11:54 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:12.931 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # pci_devs=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # net_devs=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # e810=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@296 -- # local -ga e810 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # x722=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@297 -- # local -ga x722 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # mlx=() 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@298 -- # local -ga mlx 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@314 -- # 
mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:12.932 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:12.932 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.932 
12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:12.932 Found net devices under 0000:86:00.0: cvl_0_0 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:12.932 Found net devices under 0000:86:00.1: cvl_0_1 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@414 -- # is_hw=yes 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:12.932 12:11:59 
nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:12.932 12:11:59 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:12.932 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:12.932 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:12.932 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:12.932 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:12.932 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:12.932 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.171 ms 00:26:12.932 00:26:12.932 --- 10.0.0.2 ping statistics --- 00:26:12.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.932 rtt min/avg/max/mdev = 0.171/0.171/0.171/0.000 ms 00:26:12.932 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:12.932 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:26:12.932 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.285 ms 00:26:12.932 00:26:12.932 --- 10.0.0.1 ping statistics --- 00:26:12.932 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:12.932 rtt min/avg/max/mdev = 0.285/0.285/0.285/0.000 ms 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@422 -- # return 0 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@140 -- # trap cleanup SIGINT SIGTERM EXIT 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@141 -- # [[ 0 -eq 1 ]] 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@145 -- # run_test nvmf_digest_clean run_digest 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:12.933 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:13.192 ************************************ 00:26:13.192 START TEST nvmf_digest_clean 00:26:13.192 ************************************ 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1123 -- # run_digest 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@120 -- # local dsa_initiator 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
host/digest.sh@121 -- # [[ '' == \d\s\a\_\i\n\i\t\i\a\t\o\r ]] 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@121 -- # dsa_initiator=false 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@123 -- # tgt_params=("--wait-for-rpc") 00:26:13.192 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@124 -- # nvmfappstart --wait-for-rpc 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@481 -- # nvmfpid=468882 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@482 -- # waitforlisten 468882 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 468882 ']' 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:13.193 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:13.193 12:12:00 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:13.193 [2024-07-25 12:12:00.242758] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:13.193 [2024-07-25 12:12:00.242800] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:13.193 EAL: No free 2048 kB hugepages reported on node 1 00:26:13.193 [2024-07-25 12:12:00.300619] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:13.193 [2024-07-25 12:12:00.380530] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:13.193 [2024-07-25 12:12:00.380561] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:13.193 [2024-07-25 12:12:00.380569] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:13.193 [2024-07-25 12:12:00.380575] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:13.193 [2024-07-25 12:12:00.380581] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:13.193 [2024-07-25 12:12:00.380597] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@125 -- # [[ '' == \d\s\a\_\t\a\r\g\e\t ]] 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@126 -- # common_target_config 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@43 -- # rpc_cmd 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.131 null0 00:26:14.131 [2024-07-25 12:12:01.176370] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:14.131 [2024-07-25 12:12:01.200540] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@128 -- # run_bperf randread 4096 128 false 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=469176 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 469176 /var/tmp/bperf.sock 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 469176 ']' 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@834 -- # local max_retries=100 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:14.131 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:14.131 12:12:01 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:14.131 [2024-07-25 12:12:01.251506] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:14.131 [2024-07-25 12:12:01.251547] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469176 ] 00:26:14.131 EAL: No free 2048 kB hugepages reported on node 1 00:26:14.131 [2024-07-25 12:12:01.306081] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:14.390 [2024-07-25 12:12:01.387588] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:14.959 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:14.959 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:14.959 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:14.959 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:14.959 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:15.219 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.219 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:15.478 nvme0n1 00:26:15.478 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:15.478 12:12:02 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:15.737 Running I/O for 2 seconds... 
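The lines above are the complete driver sequence for one digest pass: bdevperf is launched with --wait-for-rpc so nothing starts until the controller can be attached with data digest enabled, initialization is then finished over /var/tmp/bperf.sock, the TCP controller is attached with --ddgst, and the 2-second workload is kicked off through bdevperf.py. Condensed into a bash sketch (paths, address and NQN are the ones used in this workspace; the real script additionally waits for the socket to appear and tracks the bdevperf PID, which is omitted here):

spdk=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
sock=/var/tmp/bperf.sock

# Start bdevperf idle; --wait-for-rpc defers I/O until framework_start_init is called.
$spdk/build/examples/bdevperf -m 2 -r $sock -w randread -o 4096 -t 2 -q 128 -z --wait-for-rpc &

$spdk/scripts/rpc.py -s $sock framework_start_init
$spdk/scripts/rpc.py -s $sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
    -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

# Run the timed workload; bdevperf prints the summary table that follows in the log.
$spdk/examples/bdev/bdevperf/bdevperf.py -s $sock perform_tests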
00:26:17.646 00:26:17.646 Latency(us) 00:26:17.646 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.646 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:17.646 nvme0n1 : 2.04 26011.31 101.61 0.00 0.00 4849.15 2535.96 45362.31 00:26:17.646 =================================================================================================================== 00:26:17.646 Total : 26011.31 101.61 0.00 0.00 4849.15 2535.96 45362.31 00:26:17.646 0 00:26:17.646 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:17.647 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:17.647 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:17.647 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:17.647 | select(.opcode=="crc32c") 00:26:17.647 | "\(.module_name) \(.executed)"' 00:26:17.647 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 469176 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 469176 ']' 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 469176 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:17.906 12:12:04 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469176 00:26:17.906 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:17.906 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:17.906 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469176' 00:26:17.906 killing process with pid 469176 00:26:17.906 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 469176 00:26:17.906 Received shutdown signal, test time was about 2.000000 seconds 00:26:17.906 00:26:17.906 Latency(us) 00:26:17.907 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:17.907 =================================================================================================================== 00:26:17.907 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:17.907 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 469176 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@129 -- # run_bperf randread 131072 16 false 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randread 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=469926 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 469926 /var/tmp/bperf.sock 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 469926 ']' 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:18.167 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:18.167 12:12:05 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:18.167 [2024-07-25 12:12:05.254728] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:18.167 [2024-07-25 12:12:05.254775] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid469926 ] 00:26:18.167 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:18.167 Zero copy mechanism will not be used. 
00:26:18.167 EAL: No free 2048 kB hugepages reported on node 1 00:26:18.167 [2024-07-25 12:12:05.307546] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:18.167 [2024-07-25 12:12:05.379957] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.106 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:19.676 nvme0n1 00:26:19.676 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:19.676 12:12:06 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:19.676 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:19.676 Zero copy mechanism will not be used. 00:26:19.676 Running I/O for 2 seconds... 
00:26:21.582 00:26:21.582 Latency(us) 00:26:21.582 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.583 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072) 00:26:21.583 nvme0n1 : 2.00 2119.11 264.89 0.00 0.00 7548.68 6724.56 27582.11 00:26:21.583 =================================================================================================================== 00:26:21.583 Total : 2119.11 264.89 0.00 0.00 7548.68 6724.56 27582.11 00:26:21.583 0 00:26:21.583 12:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:21.583 12:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:21.583 12:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:21.583 12:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:21.583 | select(.opcode=="crc32c") 00:26:21.583 | "\(.module_name) \(.executed)"' 00:26:21.583 12:12:08 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 469926 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 469926 ']' 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 469926 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 469926 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 469926' 00:26:21.845 killing process with pid 469926 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 469926 00:26:21.845 Received shutdown signal, test time was about 2.000000 seconds 00:26:21.845 00:26:21.845 Latency(us) 00:26:21.845 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:21.845 =================================================================================================================== 00:26:21.845 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:21.845 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 469926 00:26:22.105 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@130 -- # run_bperf randwrite 4096 128 false 00:26:22.105 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:22.105 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=4096 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=128 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=470841 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 470841 /var/tmp/bperf.sock 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z --wait-for-rpc 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 470841 ']' 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:22.106 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:22.106 12:12:09 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:22.106 [2024-07-25 12:12:09.285888] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:26:22.106 [2024-07-25 12:12:09.285936] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid470841 ] 00:26:22.106 EAL: No free 2048 kB hugepages reported on node 1 00:26:22.106 [2024-07-25 12:12:09.339646] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:22.366 [2024-07-25 12:12:09.422032] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:22.935 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:22.935 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:22.935 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:22.935 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:22.935 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:23.194 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.194 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:23.453 nvme0n1 00:26:23.453 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:23.453 12:12:10 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:23.712 Running I/O for 2 seconds... 
00:26:25.624 00:26:25.624 Latency(us) 00:26:25.624 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.624 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:25.624 nvme0n1 : 2.00 25898.86 101.17 0.00 0.00 4933.77 3333.79 31685.23 00:26:25.624 =================================================================================================================== 00:26:25.624 Total : 25898.86 101.17 0.00 0.00 4933.77 3333.79 31685.23 00:26:25.624 0 00:26:25.624 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:25.624 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:25.624 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:25.624 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:25.624 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:25.624 | select(.opcode=="crc32c") 00:26:25.624 | "\(.module_name) \(.executed)"' 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 470841 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 470841 ']' 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 470841 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:25.884 12:12:12 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 470841 00:26:25.884 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:25.884 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:25.884 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 470841' 00:26:25.884 killing process with pid 470841 00:26:25.884 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 470841 00:26:25.884 Received shutdown signal, test time was about 2.000000 seconds 00:26:25.884 00:26:25.885 Latency(us) 00:26:25.885 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:25.885 =================================================================================================================== 00:26:25.885 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:25.885 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 470841 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@131 -- # run_bperf randwrite 131072 16 false 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@77 -- # local rw bs qd scan_dsa 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@78 -- # local acc_module acc_executed exp_module 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # rw=randwrite 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # bs=131072 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # qd=16 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@80 -- # scan_dsa=false 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@83 -- # bperfpid=471524 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@84 -- # waitforlisten 471524 /var/tmp/bperf.sock 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@82 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z --wait-for-rpc 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@829 -- # '[' -z 471524 ']' 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:26.146 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:26.146 12:12:13 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:26.146 [2024-07-25 12:12:13.246622] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:26.146 [2024-07-25 12:12:13.246669] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid471524 ] 00:26:26.146 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:26.146 Zero copy mechanism will not be used. 
00:26:26.146 EAL: No free 2048 kB hugepages reported on node 1 00:26:26.146 [2024-07-25 12:12:13.300022] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:26.146 [2024-07-25 12:12:13.380670] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@862 -- # return 0 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@86 -- # false 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@87 -- # bperf_rpc framework_start_init 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@89 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.153 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:27.723 nvme0n1 00:26:27.723 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@92 -- # bperf_py perform_tests 00:26:27.723 12:12:14 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:27.723 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:27.723 Zero copy mechanism will not be used. 00:26:27.723 Running I/O for 2 seconds... 
00:26:29.632 00:26:29.632 Latency(us) 00:26:29.632 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.632 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072) 00:26:29.632 nvme0n1 : 2.01 1362.98 170.37 0.00 0.00 11705.31 9232.03 38523.77 00:26:29.632 =================================================================================================================== 00:26:29.632 Total : 1362.98 170.37 0.00 0.00 11705.31 9232.03 38523.77 00:26:29.632 0 00:26:29.632 12:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # read -r acc_module acc_executed 00:26:29.632 12:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@93 -- # get_accel_stats 00:26:29.632 12:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@36 -- # bperf_rpc accel_get_stats 00:26:29.632 12:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@37 -- # jq -rc '.operations[] 00:26:29.632 | select(.opcode=="crc32c") 00:26:29.632 | "\(.module_name) \(.executed)"' 00:26:29.632 12:12:16 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock accel_get_stats 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # false 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@94 -- # exp_module=software 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@95 -- # (( acc_executed > 0 )) 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@96 -- # [[ software == \s\o\f\t\w\a\r\e ]] 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@98 -- # killprocess 471524 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 471524 ']' 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 471524 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 471524 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 471524' 00:26:29.892 killing process with pid 471524 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 471524 00:26:29.892 Received shutdown signal, test time was about 2.000000 seconds 00:26:29.892 00:26:29.892 Latency(us) 00:26:29.892 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:29.892 =================================================================================================================== 00:26:29.892 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:29.892 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- 
common/autotest_common.sh@972 -- # wait 471524 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- host/digest.sh@132 -- # killprocess 468882 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@948 -- # '[' -z 468882 ']' 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@952 -- # kill -0 468882 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # uname 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 468882 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@966 -- # echo 'killing process with pid 468882' 00:26:30.152 killing process with pid 468882 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@967 -- # kill 468882 00:26:30.152 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@972 -- # wait 468882 00:26:30.412 00:26:30.412 real 0m17.262s 00:26:30.412 user 0m34.337s 00:26:30.412 sys 0m3.333s 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_clean -- common/autotest_common.sh@10 -- # set +x 00:26:30.412 ************************************ 00:26:30.412 END TEST nvmf_digest_clean 00:26:30.412 ************************************ 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@147 -- # run_test nvmf_digest_error run_digest_error 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:30.412 ************************************ 00:26:30.412 START TEST nvmf_digest_error 00:26:30.412 ************************************ 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1123 -- # run_digest_error 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@102 -- # nvmfappstart --wait-for-rpc 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@481 -- # nvmfpid=472273 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@482 -- # waitforlisten 472273 00:26:30.412 12:12:17 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF --wait-for-rpc 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 472273 ']' 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:30.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:30.412 12:12:17 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:30.412 [2024-07-25 12:12:17.578654] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:30.412 [2024-07-25 12:12:17.578695] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:30.412 EAL: No free 2048 kB hugepages reported on node 1 00:26:30.412 [2024-07-25 12:12:17.634645] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:30.672 [2024-07-25 12:12:17.715478] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:30.672 [2024-07-25 12:12:17.715513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:30.672 [2024-07-25 12:12:17.715521] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:30.672 [2024-07-25 12:12:17.715527] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:30.672 [2024-07-25 12:12:17.715532] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
00:26:30.672 [2024-07-25 12:12:17.715548] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@104 -- # rpc_cmd accel_assign_opc -o crc32c -m error 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.242 [2024-07-25 12:12:18.417576] accel_rpc.c: 167:rpc_accel_assign_opc: *NOTICE*: Operation crc32c will be assigned to module error 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@105 -- # common_target_config 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@43 -- # rpc_cmd 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:31.242 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.502 null0 00:26:31.502 [2024-07-25 12:12:18.506162] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:31.502 [2024-07-25 12:12:18.530330] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@108 -- # run_bperf_err randread 4096 128 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=472470 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 4096 -t 2 -q 128 -z 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 472470 /var/tmp/bperf.sock 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 472470 ']' 
00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:31.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:31.502 12:12:18 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:31.502 [2024-07-25 12:12:18.578269] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:31.502 [2024-07-25 12:12:18.578310] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid472470 ] 00:26:31.502 EAL: No free 2048 kB hugepages reported on node 1 00:26:31.502 [2024-07-25 12:12:18.631033] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:31.502 [2024-07-25 12:12:18.710033] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.439 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:32.698 nvme0n1 00:26:32.698 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:32.698 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:32.698 12:12:19 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:32.698 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:32.698 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:32.698 12:12:19 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:32.698 Running I/O for 2 seconds... 00:26:32.958 [2024-07-25 12:12:19.977653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:19.977688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:19557 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:19.977698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:19.987809] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:19.987833] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:98 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:19.987843] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:19.998200] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:19.998221] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:12341 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:19.998230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:20.007621] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:20.007964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:98 nsid:1 lba:2490 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:20.008028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:98 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:20.020079] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:20.020103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:10322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:20.020113] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:20.028872] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:20.028894] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:8849 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:20.028903] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:20.039888] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.958 [2024-07-25 12:12:20.039909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:14418 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.958 [2024-07-25 12:12:20.039918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.958 [2024-07-25 12:12:20.050274] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.050303] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:17141 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.050317] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.060526] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.060548] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:13512 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.060557] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.071400] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.071421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:12 nsid:1 lba:4964 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.071429] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.079919] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.079939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22505 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.079948] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.090349] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.090369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:5882 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.090378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.104709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.104729] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:14662 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 
[2024-07-25 12:12:20.104738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.115570] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.115590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:5283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.115599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.126953] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.126976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:10354 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.126988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.135584] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.135605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:16202 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.135613] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.147403] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.147424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:10937 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.147433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:86 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.160653] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.160674] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:11528 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.160683] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.169799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.169819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:9168 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.169827] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.183594] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.183615] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:21847 len:1 SGL 
TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.183624] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.193783] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.193803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:19840 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.193812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:32.959 [2024-07-25 12:12:20.203907] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:32.959 [2024-07-25 12:12:20.203927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:32.959 [2024-07-25 12:12:20.203935] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.218 [2024-07-25 12:12:20.213959] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.218 [2024-07-25 12:12:20.213980] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:29 nsid:1 lba:10842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.218 [2024-07-25 12:12:20.213989] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:29 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.218 [2024-07-25 12:12:20.223464] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.218 [2024-07-25 12:12:20.223489] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:72 nsid:1 lba:8322 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.218 [2024-07-25 12:12:20.223498] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.218 [2024-07-25 12:12:20.238026] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.218 [2024-07-25 12:12:20.238052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:23399 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.238061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.249501] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.249521] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:20187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.249529] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.258493] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.258513] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
READ sqid:1 cid:51 nsid:1 lba:3284 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.258521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.268396] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.268417] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:17437 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.268425] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.278184] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.278205] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:25003 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.278214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.288288] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.288310] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:70 nsid:1 lba:23659 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.288318] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.296900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.296921] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:13941 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.296930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.306787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.306810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:1385 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.306824] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:44 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.316327] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.316350] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:22266 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.316359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.325886] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.325910] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:12649 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.325919] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.335735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.335757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:4834 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.335766] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:100 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.346007] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.346028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:12697 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.346036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.355142] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.355162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:18802 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.355171] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.364401] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.364421] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:13115 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.364430] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.374482] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.374503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:15409 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.374511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.384917] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.384938] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:7116 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.384946] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.394752] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on 
tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.394777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:32 nsid:1 lba:1184 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.394786] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.403921] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.403942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:7227 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.403951] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.414533] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.414555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:18663 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.414563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.424137] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.424172] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:20245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.424182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.433933] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.433953] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:24450 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.433962] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.443649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.443670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:15798 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.443679] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.453169] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.453189] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:6648 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.453199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.219 [2024-07-25 12:12:20.462149] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.219 [2024-07-25 12:12:20.462170] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:19575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.219 [2024-07-25 12:12:20.462178] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.472722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.472745] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:21125 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.472754] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:49 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.481414] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.481435] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:17890 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.481443] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.491709] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.491730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:14540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.491739] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.501472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.501493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:1249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.501503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.510811] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.510832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:16776 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.510840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.522271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.522293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:7658 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.522302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:33.479 [2024-07-25 12:12:20.530893] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.530913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:24888 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.530922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:85 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.540825] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.540845] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:15156 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.540854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.550456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.550477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21974 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.550486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.560614] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.560634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:11239 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.560646] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.570335] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.570356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:3968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.570365] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.579534] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.579554] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:22085 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.579562] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.588465] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.588485] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:15406 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.588494] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.598531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.598552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:1250 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.598561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.607168] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.607188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:8795 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.607197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.618148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.479 [2024-07-25 12:12:20.618168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:10961 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.479 [2024-07-25 12:12:20.618176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.479 [2024-07-25 12:12:20.626207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.626226] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:22205 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.626234] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.636483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.636503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:24278 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.636511] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.645601] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.645625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:9455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.645633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.655664] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.655684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:10 nsid:1 lba:4494 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.655692] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:10 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.664805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.664826] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:57 nsid:1 lba:7968 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.664835] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.674094] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.674115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:6826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.674124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.683986] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.684007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:6351 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.684016] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.693244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.693264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:16666 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.693273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.701997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.702018] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:8295 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.702026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.711432] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.711453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:104 nsid:1 lba:25147 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.711461] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:104 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.480 [2024-07-25 12:12:20.720387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.480 [2024-07-25 12:12:20.720407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:15362 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.480 [2024-07-25 12:12:20.720415] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.739 [2024-07-25 12:12:20.730092] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.739 [2024-07-25 12:12:20.730115] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2057 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.739 [2024-07-25 12:12:20.730123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.739 [2024-07-25 12:12:20.740181] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.739 [2024-07-25 12:12:20.740202] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:20749 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.739 [2024-07-25 12:12:20.740210] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.739 [2024-07-25 12:12:20.748937] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.739 [2024-07-25 12:12:20.748957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:10832 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.748966] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.758722] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.758742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:12030 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.758751] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.767055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.767075] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:16090 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.767084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.776994] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.777014] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:19712 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.777022] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.786295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.786315] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:1187 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 
0x0 00:26:33.740 [2024-07-25 12:12:20.786323] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:73 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.795586] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.795605] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:21160 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.795614] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.804931] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.804951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:25089 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.804963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.814257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.814277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:107 nsid:1 lba:927 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.814286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:107 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.823727] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.823747] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:105 nsid:1 lba:23005 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.823755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:105 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.832896] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.832917] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:15957 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.832925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.842144] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.842164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:14302 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.842173] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.851371] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.851390] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:19 nsid:1 
lba:18869 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.851399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:19 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.861243] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.861263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:5017 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.861271] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.870405] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.870425] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:3531 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.870433] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.879998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.880019] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:11971 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.880027] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.888622] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.888642] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:17280 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.888650] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.898159] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.898179] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:12895 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.898187] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.907385] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.907405] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:43 nsid:1 lba:18240 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.907413] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:43 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.916787] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.916807] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:77 nsid:1 lba:25329 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.916815] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:77 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.926278] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.926300] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:17 nsid:1 lba:20143 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.926308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:17 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.936575] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.936595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:12201 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.936604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.945295] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.945316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:17753 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.945324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.954509] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.954529] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:13466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.954537] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.964179] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.964200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:11359 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.964211] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:75 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.972843] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:33.740 [2024-07-25 12:12:20.972864] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:112 nsid:1 lba:17377 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.740 [2024-07-25 12:12:20.972872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:112 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:33.740 [2024-07-25 12:12:20.982359] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 
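The repeated "data digest error" entries above come from the NVMe/TCP data digest (DDGST) check: the initiator computes a CRC32C over each data PDU's payload and compares it with the digest carried in the PDU, and this test deliberately provokes mismatches. The following minimal C sketch (not SPDK code; crc32c() and verify_data_digest() are illustrative names, and the bitwise CRC32C here is only a reference implementation) shows what that check conceptually does.

/*
 * Minimal sketch of an NVMe/TCP-style data digest check.
 * NVMe/TCP's DDGST is a CRC32C (Castagnoli); a mismatch is what the log
 * above reports as "data digest error".
 */
#include <stdint.h>
#include <stdio.h>
#include <string.h>

/* Bitwise (table-less) CRC32C, reflected polynomial 0x82F63B78. */
static uint32_t crc32c(const uint8_t *buf, size_t len)
{
    uint32_t crc = 0xFFFFFFFFu;

    for (size_t i = 0; i < len; i++) {
        crc ^= buf[i];
        for (int bit = 0; bit < 8; bit++) {
            crc = (crc >> 1) ^ (0x82F63B78u & (uint32_t)(-(int32_t)(crc & 1)));
        }
    }
    return crc ^ 0xFFFFFFFFu;
}

/* Returns 0 when the digest carried in the PDU matches the payload. */
static int verify_data_digest(const uint8_t *data, size_t len, uint32_t received_ddgst)
{
    return crc32c(data, len) == received_ddgst ? 0 : -1;
}

int main(void)
{
    uint8_t payload[512];
    memset(payload, 0xA5, sizeof(payload));

    uint32_t good = crc32c(payload, sizeof(payload));

    /* A corrupted digest (or corrupted payload) fails verification, which the
     * initiator then surfaces as a transient transport error, as in the log. */
    printf("intact:    %s\n", verify_data_digest(payload, sizeof(payload), good) == 0 ? "ok" : "digest error");
    printf("corrupted: %s\n", verify_data_digest(payload, sizeof(payload), good ^ 1) == 0 ? "ok" : "digest error");
    return 0;
}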
00:26:33.740 [2024-07-25 12:12:20.982381] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:10540 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:33.741 [2024-07-25 12:12:20.982390] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:20.992802] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:20.992825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4575 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:20.992834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.001746] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.001766] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:52 nsid:1 lba:1821 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.001775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.011191] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.011211] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:13 nsid:1 lba:19553 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.011219] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.021082] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.021102] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:9276 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.021110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:88 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.029777] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.029797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:11475 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.029805] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.038532] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.038552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:6375 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.038561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:76 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.049503] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: 
data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.049527] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:38 nsid:1 lba:12656 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.049535] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.057792] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.057812] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:3 nsid:1 lba:5763 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.057821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.067433] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.067453] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:67 nsid:1 lba:14709 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.067462] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:67 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.077010] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.077029] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:7466 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.077037] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.085591] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.085611] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:6219 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.085619] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.095381] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.095401] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:124 nsid:1 lba:21602 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.095409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:124 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.104846] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.104867] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:16313 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.104875] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.113986] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.114006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:7876 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.114014] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.123311] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.123331] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:7447 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.123339] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.132678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.132699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:14 nsid:1 lba:6321 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.132707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.142387] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.142407] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:4478 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.142415] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.151853] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.151873] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:17922 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.151881] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.001 [2024-07-25 12:12:21.160776] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.001 [2024-07-25 12:12:21.160796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:74 nsid:1 lba:3314 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.001 [2024-07-25 12:12:21.160804] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.170688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.170708] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:12982 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.170717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:55 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
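Each digest failure above is completed back to the caller with status "(00/22)", i.e. status code type 0x0 (generic) and status code 0x22, which spdk_nvme_print_completion renders as COMMAND TRANSIENT TRANSPORT ERROR, with p, m and dnr all clear so the command remains retryable. A small illustrative decoder for that status half-word, assuming the standard NVMe completion status layout (phase in bit 0, SC in bits 8:1, SCT in bits 11:9, CRD in 13:12, M in 14, DNR in 15); this is a sketch, not SPDK's struct:

#include <stdint.h>
#include <stdio.h>

struct cqe_status {
    unsigned p   : 1;  /* phase tag */
    unsigned sc  : 8;  /* status code */
    unsigned sct : 3;  /* status code type */
    unsigned crd : 2;  /* command retry delay */
    unsigned m   : 1;  /* more */
    unsigned dnr : 1;  /* do not retry */
};

int main(void)
{
    /* 0x0044 encodes SCT 0x0 (generic) and SC 0x22, printed as "(00/22)". */
    uint16_t raw = 0x0044;
    struct cqe_status st;

    st.p   = raw & 1;
    st.sc  = (raw >> 1) & 0xFF;
    st.sct = (raw >> 9) & 0x7;
    st.crd = (raw >> 12) & 0x3;
    st.m   = (raw >> 14) & 1;
    st.dnr = (raw >> 15) & 1;

    /* Prints "(00/22) p:0 m:0 dnr:0", matching the completion lines above. */
    printf("(%02x/%02x) p:%x m:%x dnr:%x\n", st.sct, st.sc, st.p, st.m, st.dnr);
    return 0;
}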
00:26:34.002 [2024-07-25 12:12:21.178944] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.178964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:71 nsid:1 lba:24729 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.178973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.190267] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.190288] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20130 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.190297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.199472] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.199493] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:13283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.199501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.209024] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.209049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:14372 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.209061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:42 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.217857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.217877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:99 nsid:1 lba:11463 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.217886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:99 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.227956] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.227977] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:8 nsid:1 lba:2513 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.227985] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:8 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.236961] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.236981] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:22245 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.236992] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:41 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.002 [2024-07-25 12:12:21.246545] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.002 [2024-07-25 12:12:21.246565] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:16163 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.002 [2024-07-25 12:12:21.246573] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:62 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.256423] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.256445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:6589 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.256454] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.264807] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.264827] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:15404 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.264836] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.275458] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.275478] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:18911 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.275487] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:106 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.284688] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.284709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:123 nsid:1 lba:16455 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.284717] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:123 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.294308] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.294335] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:283 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.294343] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.302900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.302920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:21443 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.302928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.312148] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.312168] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:9645 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.262 [2024-07-25 12:12:21.312176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.262 [2024-07-25 12:12:21.321735] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.262 [2024-07-25 12:12:21.321755] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:9366 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.321763] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:63 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.330835] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.330856] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:21619 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.330864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.340217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.340237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:4624 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.340245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.350748] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.350769] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:35 nsid:1 lba:14872 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.350777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:35 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.358691] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.358712] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:10117 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.358720] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.368815] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.368835] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:80 nsid:1 lba:18247 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.368843] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:80 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.377627] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.377648] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8708 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.377656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.387517] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.387537] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:19417 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.387546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.396861] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.396881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:21756 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.396890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.406325] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.406346] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:25 nsid:1 lba:23816 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.406354] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:25 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.415555] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.415575] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:10745 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.415583] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.424539] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.424559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:17049 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.424568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.433946] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.433965] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:64 nsid:1 lba:20300 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:34.263 [2024-07-25 12:12:21.433973] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:64 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.444059] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.444079] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:14102 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.444087] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:91 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.452649] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.452670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:5567 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.452681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:9 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.463204] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.463224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:2803 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.463232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:102 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.471911] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.471932] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:65 nsid:1 lba:3390 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.471940] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.481307] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.481327] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:8900 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.481335] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.491468] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.491488] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:119 nsid:1 lba:8338 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.491496] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.500259] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.500279] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:114 nsid:1 lba:18543 
len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.500287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:114 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.263 [2024-07-25 12:12:21.509812] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.263 [2024-07-25 12:12:21.509832] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:3079 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.263 [2024-07-25 12:12:21.509840] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.518892] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.518913] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:23380 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.518922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.528842] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.528862] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:92 nsid:1 lba:12065 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.528872] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:92 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.537804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.537824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:9095 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.537832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.547297] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.547317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:8524 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.547326] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:101 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.556694] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.556714] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:54 nsid:1 lba:3064 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.556722] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:54 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.566373] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.566393] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: READ sqid:1 cid:36 nsid:1 lba:2440 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.566402] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.575470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.523 [2024-07-25 12:12:21.575491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:5 nsid:1 lba:5029 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.523 [2024-07-25 12:12:21.575499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.523 [2024-07-25 12:12:21.585678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.585699] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:83 nsid:1 lba:24866 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.585707] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:83 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.594928] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.594948] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:56 nsid:1 lba:5583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.594956] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:56 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.604564] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.604584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:36 nsid:1 lba:24918 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.604592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:36 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.613053] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.613073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:23842 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.613084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.623456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.623476] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:13327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.623484] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:81 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.632062] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.632082] 
nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:12779 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.632090] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.642481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.642502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:16264 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.642510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.652256] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.652277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:2887 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.652285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:89 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.666055] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.666076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:19720 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.666084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.678428] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.678448] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:21327 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.678457] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:33 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.687578] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.687598] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:7819 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.687607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:113 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.696830] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.696851] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:2 nsid:1 lba:2639 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.696859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.706681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 
00:26:34.524 [2024-07-25 12:12:21.706705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:121 nsid:1 lba:21363 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.706714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.715631] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.715652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:82 nsid:1 lba:20717 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.715660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:82 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.725186] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.725207] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:23 nsid:1 lba:14396 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.725216] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:23 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.734804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.734824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:18786 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.734833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:51 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.743098] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.743119] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:12583 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.743127] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:30 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.753623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.753645] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:1956 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.753656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:47 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.762573] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.762595] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:11 nsid:1 lba:11242 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.762604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.524 [2024-07-25 12:12:21.772646] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: 
*ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.524 [2024-07-25 12:12:21.772667] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:15515 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.524 [2024-07-25 12:12:21.772676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.784 [2024-07-25 12:12:21.782456] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.784 [2024-07-25 12:12:21.782477] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:16486 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.784 [2024-07-25 12:12:21.782486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.784 [2024-07-25 12:12:21.791290] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.784 [2024-07-25 12:12:21.791311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:18504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.784 [2024-07-25 12:12:21.791319] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:116 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.784 [2024-07-25 12:12:21.802006] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.784 [2024-07-25 12:12:21.802028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:84 nsid:1 lba:15442 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.784 [2024-07-25 12:12:21.802036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.811100] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.811120] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:40 nsid:1 lba:21249 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.811129] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.820538] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.820559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:7657 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.820568] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:59 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.829856] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.829878] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:61 nsid:1 lba:21912 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.829886] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:61 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.839341] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.839361] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:60 nsid:1 lba:7585 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.839370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:60 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.849018] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.849038] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:26 nsid:1 lba:15504 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.849054] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:26 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.858049] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.858070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:1747 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.858079] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.867549] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.867569] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:1608 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.867581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.877269] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.877289] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:13172 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.877297] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:24 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.886531] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.886552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:117 nsid:1 lba:17457 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.886560] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:117 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.897083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.897104] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:20 nsid:1 lba:3772 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.897112] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:20 cdw0:0 sqhd:0001 p:0 m:0 
dnr:0 00:26:34.785 [2024-07-25 12:12:21.905309] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.905330] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:2809 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.905338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:96 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.915305] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.915325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:39 nsid:1 lba:4826 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.915334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:39 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.923885] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.923907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:118 nsid:1 lba:22972 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.923915] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:118 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.934235] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.934256] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:19878 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.934264] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:90 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 [2024-07-25 12:12:21.943127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0xb094f0) 00:26:34.785 [2024-07-25 12:12:21.943148] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:58 len:1 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:34.785 [2024-07-25 12:12:21.943157] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:34.785 00:26:34.785 Latency(us) 00:26:34.785 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:34.785 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:26:34.785 nvme0n1 : 2.00 26080.88 101.88 0.00 0.00 4901.83 2436.23 27354.16 00:26:34.785 =================================================================================================================== 00:26:34.785 Total : 26080.88 101.88 0.00 0.00 4901.83 2436.23 27354.16 00:26:34.785 0 00:26:34.785 12:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:34.785 12:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1 00:26:34.785 12:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:34.785 | .driver_specific 00:26:34.785 | .nvme_error 00:26:34.785 | .status_code 00:26:34.785 | 
.command_transient_transport_error' 00:26:34.785 12:12:21 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 204 > 0 )) 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 472470 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 472470 ']' 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 472470 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 472470 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 472470' 00:26:35.045 killing process with pid 472470 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 472470 00:26:35.045 Received shutdown signal, test time was about 2.000000 seconds 00:26:35.045 00:26:35.045 Latency(us) 00:26:35.045 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:35.045 =================================================================================================================== 00:26:35.045 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:35.045 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 472470 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@109 -- # run_bperf_err randread 131072 16 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randread 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=473164 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 473164 /var/tmp/bperf.sock 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randread -o 131072 -t 2 -q 16 -z 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 473164 ']' 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:35.323 12:12:22 
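The (( 204 > 0 )) check traced above is the pass criterion for this digest-error case: the injected CRC32C corruption has to surface as a non-zero command_transient_transport_error count on nvme0n1. A minimal sketch of that query, pieced together from the rpc.py and jq invocations shown in this run (socket path and bdev name are the ones used here, not fixed values):

  # Read the NVMe error counter that bdevperf exposes through bdev_get_iostat.
  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock
  errcount=$("$RPC" -s "$BPERF_SOCK" bdev_get_iostat -b nvme0n1 \
      | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  # Each data digest error above completed as COMMAND TRANSIENT TRANSPORT ERROR and
  # was retried (--bdev-retry-count -1), so any value > 0 means the injected
  # corruption was caught; this run counted 204.
  (( errcount > 0 )) && echo "data digest errors detected: $errcount"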
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:35.323 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:35.323 12:12:22 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:35.323 [2024-07-25 12:12:22.427233] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:35.323 [2024-07-25 12:12:22.427281] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473164 ] 00:26:35.323 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:35.323 Zero copy mechanism will not be used. 00:26:35.323 EAL: No free 2048 kB hugepages reported on node 1 00:26:35.323 [2024-07-25 12:12:22.481898] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:35.323 [2024-07-25 12:12:22.551576] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.261 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:36.831 nvme0n1 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 
-- # set +x 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:36.831 12:12:23 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:36.831 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:36.831 Zero copy mechanism will not be used. 00:26:36.831 Running I/O for 2 seconds... 00:26:36.831 [2024-07-25 12:12:23.974736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:23.974774] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:23.974784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:23.989964] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:23.989990] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:23.989999] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.004662] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.004688] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:24.004696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.019194] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.019217] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:24.019226] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.033668] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.033693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19680 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:24.033702] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.047678] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.047701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16768 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
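The xtrace above spells out how this case is wired up before I/O starts: per-status NVMe error counters and unlimited retries are enabled on the bdevperf side, the controller is attached over TCP with data digest (--ddgst) on, CRC32C error injection is armed, and perform_tests then launches the 2-second randread run whose digest-error notices fill the output around this point. A condensed sketch of that sequence, reconstructed from the commands shown in this run (the socket that rpc_cmd talks to is not visible in this excerpt, so using rpc.py's default for those calls is an assumption):

  RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  BPERF_SOCK=/var/tmp/bperf.sock     # bdevperf RPC socket (bperf_rpc in the trace)
  # Keep per-status NVMe error counters and retry failed I/O indefinitely.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  # rpc_cmd drives the accel error injection; its socket is not shown here, so
  # rpc.py's default is assumed in this sketch.
  "$RPC" accel_error_inject_error -o crc32c -t disable          # start from a clean state
  # Attach the NVMe/TCP controller with data digest (--ddgst) enabled.
  "$RPC" -s "$BPERF_SOCK" bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 \
      -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  # Corrupt 32 crc32c operations so data digest verification starts failing.
  "$RPC" accel_error_inject_error -o crc32c -t corrupt -i 32
  # Kick off the timed randread workload defined on the bdevperf command line.
  /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s "$BPERF_SOCK" perform_tests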
00:26:36.831 [2024-07-25 12:12:24.047710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.061771] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.061794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:24.061803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:36.831 [2024-07-25 12:12:24.076127] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:36.831 [2024-07-25 12:12:24.076147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:36.831 [2024-07-25 12:12:24.076155] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.091 [2024-07-25 12:12:24.101158] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.091 [2024-07-25 12:12:24.101180] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22112 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.091 [2024-07-25 12:12:24.101192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.091 [2024-07-25 12:12:24.117556] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.091 [2024-07-25 12:12:24.117577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24064 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.091 [2024-07-25 12:12:24.117585] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.091 [2024-07-25 12:12:24.141641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.091 [2024-07-25 12:12:24.141662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21760 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.091 [2024-07-25 12:12:24.141671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.091 [2024-07-25 12:12:24.157569] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.091 [2024-07-25 12:12:24.157590] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.157598] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.171914] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.171934] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:7456 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.171942] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.192154] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.192175] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.192183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.210066] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.210086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18240 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.210094] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.224258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.224278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10624 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.224286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.238716] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.238736] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16480 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.238744] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.252574] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.252597] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.252606] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.267648] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.267668] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.267676] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.281623] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.281643] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11392 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.281651] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.302900] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.302920] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.302928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.321421] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.321442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15776 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.321450] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.092 [2024-07-25 12:12:24.336610] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.092 [2024-07-25 12:12:24.336631] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.092 [2024-07-25 12:12:24.336639] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.350639] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.350660] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.350669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.374244] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.374264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16608 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.374273] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.389762] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.389783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.389791] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.403805] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 
00:26:37.352 [2024-07-25 12:12:24.403825] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.403834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.417732] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.417753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7744 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.417761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.431772] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.431793] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.431801] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.455057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.455077] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4128 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.455085] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.470124] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.470144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.470152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.491271] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.491293] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.491302] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.513444] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.513466] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.513474] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.530460] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.530482] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18656 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.530490] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.557593] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.557616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.557628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.573882] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.573903] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.573912] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.352 [2024-07-25 12:12:24.598037] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.352 [2024-07-25 12:12:24.598064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.352 [2024-07-25 12:12:24.598073] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.612 [2024-07-25 12:12:24.613129] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.612 [2024-07-25 12:12:24.613151] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.612 [2024-07-25 12:12:24.613160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.612 [2024-07-25 12:12:24.627161] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.612 [2024-07-25 12:12:24.627183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.612 [2024-07-25 12:12:24.627191] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.612 [2024-07-25 12:12:24.640940] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.612 [2024-07-25 12:12:24.640961] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17888 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.612 [2024-07-25 12:12:24.640969] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 
p:0 m:0 dnr:0 00:26:37.612 [2024-07-25 12:12:24.654987] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.612 [2024-07-25 12:12:24.655007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.612 [2024-07-25 12:12:24.655015] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.669430] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.669450] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.669458] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.683551] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.683571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.683579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.697490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.697511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.697519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.711576] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.711596] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10176 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.711604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.725712] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.725733] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.725741] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.739997] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.740017] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.740026] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.754490] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.754511] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25536 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.754519] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.768522] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.768543] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5184 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.768551] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.782865] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.782886] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.782894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.797253] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.797274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.797282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.811637] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.811657] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.811669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.825927] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.825947] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18048 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.825955] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.839945] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.839964] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.839973] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.613 [2024-07-25 12:12:24.854028] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.613 [2024-07-25 12:12:24.854054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14144 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.613 [2024-07-25 12:12:24.854063] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.868171] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.868193] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7328 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.868201] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.882329] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.882349] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.882357] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.896470] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.896490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:416 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.896499] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.910685] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.910706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11424 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.910714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.924804] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.924824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24864 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.924833] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.938998] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.939023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:15200 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:37.873 [2024-07-25 12:12:24.939031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.953120] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.953141] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8640 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.953149] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.967341] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.967362] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24448 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.967370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.981353] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.981374] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.981383] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:24.995483] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:24.995504] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12000 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:24.995513] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.009619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.009640] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.009648] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.023758] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.023779] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6720 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.023787] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.037901] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.037922] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:5632 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.037930] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.052170] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.052191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.052199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.066348] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.066369] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.066377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.080518] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.080538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.080546] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.094749] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.094770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.094778] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:37.873 [2024-07-25 12:12:25.109003] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:37.873 [2024-07-25 12:12:25.109023] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17344 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:37.873 [2024-07-25 12:12:25.109031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.133 [2024-07-25 12:12:25.123054] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.133 [2024-07-25 12:12:25.123076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.133 [2024-07-25 12:12:25.123084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.133 [2024-07-25 12:12:25.137224] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.133 [2024-07-25 12:12:25.137244] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:3872 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.133 [2024-07-25 12:12:25.137253] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.133 [2024-07-25 12:12:25.151365] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.133 [2024-07-25 12:12:25.151386] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16544 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.133 [2024-07-25 12:12:25.151394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.133 [2024-07-25 12:12:25.165629] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.133 [2024-07-25 12:12:25.165649] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10688 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.133 [2024-07-25 12:12:25.165657] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.133 [2024-07-25 12:12:25.180057] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.133 [2024-07-25 12:12:25.180076] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4096 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.180088] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.194320] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.194340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.194348] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.208254] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.208274] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10912 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.208282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.222420] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.222441] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.222449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.236598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 
00:26:38.134 [2024-07-25 12:12:25.236618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14016 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.236626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.250736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.250756] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:2816 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.250764] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.264904] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.264924] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:14368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.264933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.279136] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.279156] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:21088 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.279163] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.293077] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.293096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5664 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.293105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.307303] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.307323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:6080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.307331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.321258] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.321278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8576 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.321285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.335474] 
nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.335495] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:16800 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.335503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.349720] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.349741] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7584 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.349749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.363951] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.363971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.363979] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.134 [2024-07-25 12:12:25.378083] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.134 [2024-07-25 12:12:25.378103] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.134 [2024-07-25 12:12:25.378110] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.394 [2024-07-25 12:12:25.392217] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.394 [2024-07-25 12:12:25.392237] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13056 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.394 [2024-07-25 12:12:25.392246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.394 [2024-07-25 12:12:25.406404] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.394 [2024-07-25 12:12:25.406424] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.394 [2024-07-25 12:12:25.406432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.394 [2024-07-25 12:12:25.420481] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.394 [2024-07-25 12:12:25.420502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22336 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.394 [2024-07-25 12:12:25.420514] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 
p:0 m:0 dnr:0 00:26:38.394 [2024-07-25 12:12:25.434547] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.434568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7296 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.434576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.448386] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.448406] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24032 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.448414] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.462427] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.462447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.462455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.476375] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.476395] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1312 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.476404] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.490336] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.490355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:96 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.490363] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.504505] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.504524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.504532] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.518632] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.518652] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10848 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.518660] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.532681] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.532701] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.532710] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.546799] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.546822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:1920 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.546831] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.561060] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.561081] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:17280 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.561089] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.575207] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.575228] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:13984 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.575236] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.589411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.589430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23136 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.589439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.603791] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.603811] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11264 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.603819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.618008] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.618028] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.618036] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.395 [2024-07-25 12:12:25.632276] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.395 [2024-07-25 12:12:25.632295] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10080 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.395 [2024-07-25 12:12:25.632304] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.646424] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.646445] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.646453] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.660411] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.660431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:20896 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.660439] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.674475] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.674496] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:25120 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.674504] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.688598] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.688618] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18304 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.688626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.702540] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.702559] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5216 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.702567] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.716773] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.716792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:11936 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 
00:26:38.656 [2024-07-25 12:12:25.716800] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.730736] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.730757] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:24992 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.730765] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.744619] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.744639] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:19488 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.744647] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.758803] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.758824] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:23968 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.758832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.773147] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.773166] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:12960 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.773174] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.787230] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.787250] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:9504 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.787260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.801342] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.656 [2024-07-25 12:12:25.801363] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.656 [2024-07-25 12:12:25.801370] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.656 [2024-07-25 12:12:25.815286] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.815306] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 
lba:1792 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.815313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.829604] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.829625] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18368 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.829633] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.843641] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.843661] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10784 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.843669] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.857567] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.857588] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:18496 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.857596] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.871714] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.871735] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:10400 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.871743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.885857] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.885877] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:7648 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.885885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:38.657 [2024-07-25 12:12:25.900177] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.657 [2024-07-25 12:12:25.900196] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22560 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:38.657 [2024-07-25 12:12:25.900204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:38.917 [2024-07-25 12:12:25.914192] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030) 00:26:38.917 [2024-07-25 12:12:25.914214] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:22464 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-07-25 12:12:25.914222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0
00:26:38.917 [2024-07-25 12:12:25.928257] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030)
00:26:38.917 [2024-07-25 12:12:25.928278] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:5408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-07-25 12:12:25.928286] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:38.917 [2024-07-25 12:12:25.942495] nvme_tcp.c:1459:nvme_tcp_accel_seq_recv_compute_crc32_done: *ERROR*: data digest error on tqpair=(0x1d50030)
00:26:38.917 [2024-07-25 12:12:25.942515] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:15 nsid:1 lba:8928 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:38.917 [2024-07-25 12:12:25.942523] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
00:26:38.917
00:26:38.917 Latency(us)
00:26:38.917 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:38.917 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 16, IO size: 131072)
00:26:38.917 nvme0n1 : 2.00 2048.90 256.11 0.00 0.00 7806.64 6781.55 26214.40
00:26:38.917 ===================================================================================================================
00:26:38.917 Total : 2048.90 256.11 0.00 0.00 7806.64 6781.55 26214.40
00:26:38.917 0
00:26:38.917 12:12:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:38.917 12:12:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:38.917 12:12:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:38.917 | .driver_specific
00:26:38.917 | .nvme_error
00:26:38.917 | .status_code
00:26:38.917 | .command_transient_transport_error'
00:26:38.917 12:12:25 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 132 > 0 ))
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 473164
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 473164 ']'
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 473164
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:38.917 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473164
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- #
process_name=reactor_1
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473164'
00:26:39.177 killing process with pid 473164
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 473164
00:26:39.177 Received shutdown signal, test time was about 2.000000 seconds
00:26:39.177
00:26:39.177 Latency(us)
00:26:39.177 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:39.177 ===================================================================================================================
00:26:39.177 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 473164
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@114 -- # run_bperf_err randwrite 4096 128
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=4096
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=128
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=473853
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 473853 /var/tmp/bperf.sock
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 473853 ']'
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...'
00:26:39.177 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable
00:26:39.177 12:12:26 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x
00:26:39.436 [2024-07-25 12:12:26.430286] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization...
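For reference, the pass/fail decision traced above at host/digest.sh@71 amounts to reading bdevperf's NVMe error counters over its RPC socket. A minimal sketch, assuming the same rpc.py socket, bdev name, and jq filter that appear in the trace (the randread pass above reported 132 transient transport errors, so its check passed); this is not part of the captured output:

  # Count transient transport errors for nvme0n1 via bdevperf's RPC socket, as in the trace above.
  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  errcount=$("$rpc" -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 \
    | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error')
  (( errcount > 0 ))  # require at least one injected-digest failure

The --nvme-error-stat flag passed to bdev_nvme_set_options in the setup traced below is presumably what exposes these per-status-code counters in the bdev_get_iostat output.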
00:26:39.436 [2024-07-25 12:12:26.430332] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid473853 ] 00:26:39.436 EAL: No free 2048 kB hugepages reported on node 1 00:26:39.436 [2024-07-25 12:12:26.484228] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:39.436 [2024-07-25 12:12:26.564215] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:40.005 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:40.005 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:40.006 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.006 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.264 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:40.833 nvme0n1 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:40.833 12:12:27 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:40.833 Running I/O for 2 seconds... 
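Condensed from the xtrace output above, the setup for this randwrite error pass looks roughly as follows. This is a sketch, not the script itself: the SPDK shorthand variable and the backgrounding of bdevperf are assumptions (implied by the bperfpid/waitforlisten steps), rpc_cmd is the autotest helper named in the trace (its target socket is not expanded in the log), and all flags are exactly as logged:

  SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
  "$SPDK"/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 4096 -t 2 -q 128 -z &  # bperfpid=473853; waitforlisten polls /var/tmp/bperf.sock
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1
  rpc_cmd accel_error_inject_error -o crc32c -t disable            # clear any earlier crc32c injection (host/digest.sh@63)
  "$SPDK"/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0
  rpc_cmd accel_error_inject_error -o crc32c -t corrupt -i 256     # start corrupting crc32c results (host/digest.sh@67)
  "$SPDK"/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests   # "Running I/O for 2 seconds..."

Because the controller is attached with --ddgst, every corrupted CRC shows up below as a data digest error on the write path, completed as COMMAND TRANSIENT TRANSPORT ERROR, which is what the subsequent error-count check looks for.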
00:26:40.833 [2024-07-25 12:12:27.927978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.928803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:13912 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.928834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007a p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.937526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.937797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:14323 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.937821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.947071] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.947304] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:13178 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.947324] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.956507] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.956740] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:4198 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.956759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.965932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.966167] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:12324 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.966186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.975370] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.975603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:20008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.975622] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.984785] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.985015] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:1869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.985035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 
sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:27.994215] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.833 [2024-07-25 12:12:27.994446] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:18054 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.833 [2024-07-25 12:12:27.994465] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.833 [2024-07-25 12:12:28.003638] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.003866] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:6046 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.003885] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.013030] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.013270] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.013288] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.022462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.022693] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:8237 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.022712] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.031852] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.032084] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:14427 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.032103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.041288] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.041518] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:24611 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.041538] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.050656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.050887] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:125 nsid:1 lba:16915 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.050906] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR 
(00/22) qid:1 cid:125 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.060079] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.060311] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:5490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.060334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.069437] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.070051] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:6216 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.070070] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:3 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:40.834 [2024-07-25 12:12:28.078879] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:40.834 [2024-07-25 12:12:28.079110] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:11905 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:40.834 [2024-07-25 12:12:28.079130] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.088305] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.088535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:2574 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.088554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.097782] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.098013] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:3808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.098031] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.107193] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.107422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:20636 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.107440] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.116552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.117144] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:9971 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.117162] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT 
TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.125958] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.126609] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:9331 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.126629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.135466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.135730] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:1707 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.135749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.144860] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.145294] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:10648 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.145313] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.154493] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.154725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:21290 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.154743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.163904] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.164147] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:8893 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.164165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.173217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.173692] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:8059 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.173711] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.182677] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.182902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:24433 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.182920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.094 [2024-07-25 12:12:28.192240] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.094 [2024-07-25 12:12:28.192470] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:10305 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.094 [2024-07-25 12:12:28.192489] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.201629] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.095 [2024-07-25 12:12:28.202125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:3412 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.202143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.210866] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fb8b8 00:26:41.095 [2024-07-25 12:12:28.212125] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:4 nsid:1 lba:10852 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.212143] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:4 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.221881] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fef90 00:26:41.095 [2024-07-25 12:12:28.222861] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:111 nsid:1 lba:20549 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.222880] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:111 cdw0:0 sqhd:006c p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.232684] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fc560 00:26:41.095 [2024-07-25 12:12:28.233909] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:24774 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.233928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:2 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.241799] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fef90 00:26:41.095 [2024-07-25 12:12:28.243052] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:1 nsid:1 lba:1426 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.243072] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:1 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.250835] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fc560 00:26:41.095 [2024-07-25 12:12:28.252101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:17645 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.252120] nvme_qpair.c: 474:spdk_nvme_print_completion: 
*NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:13 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.259905] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fef90 00:26:41.095 [2024-07-25 12:12:28.261185] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:3623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.261204] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:12 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.268873] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fc560 00:26:41.095 [2024-07-25 12:12:28.270325] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:6 nsid:1 lba:24247 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.270345] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:6 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.277912] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fef90 00:26:41.095 [2024-07-25 12:12:28.279360] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:960 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.279378] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:5 cdw0:0 sqhd:0073 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.287002] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ebfd0 00:26:41.095 [2024-07-25 12:12:28.288670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.288688] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:32 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.298867] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190eb760 00:26:41.095 [2024-07-25 12:12:28.300354] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:10521 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.300373] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:14 cdw0:0 sqhd:007d p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.308885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.095 [2024-07-25 12:12:28.309146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:1408 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.309165] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.318340] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.095 [2024-07-25 12:12:28.318577] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:20761 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.318595] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.327803] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.095 [2024-07-25 12:12:28.328047] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:3360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.328067] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.095 [2024-07-25 12:12:28.337142] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.095 [2024-07-25 12:12:28.337379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:10507 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.095 [2024-07-25 12:12:28.337399] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.355 [2024-07-25 12:12:28.346667] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.355 [2024-07-25 12:12:28.346906] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:13314 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.355 [2024-07-25 12:12:28.346925] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.355 [2024-07-25 12:12:28.356032] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.355 [2024-07-25 12:12:28.356724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:17862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.355 [2024-07-25 12:12:28.356743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.365466] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.356 [2024-07-25 12:12:28.365737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:18950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.365755] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.374902] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.356 [2024-07-25 12:12:28.375340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:122 nsid:1 lba:7137 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.375360] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:122 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.384219] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.356 [2024-07-25 12:12:28.386039] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:6790 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 
[2024-07-25 12:12:28.386062] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:121 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.394998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.396011] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:9951 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.396033] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.404462] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.404713] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:257 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.404732] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.413818] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.414357] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:11113 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.414376] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.423498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.423742] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:23757 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.423761] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.432932] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.433181] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:6256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.433199] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.442438] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190edd58 00:26:41.356 [2024-07-25 12:12:28.444164] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:37 nsid:1 lba:24519 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.444182] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:37 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.455459] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ed0b0 00:26:41.356 [2024-07-25 12:12:28.456474] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:68 nsid:1 lba:3942 len:1 SGL DATA BLOCK OFFSET 0x0 
len:0x1000 00:26:41.356 [2024-07-25 12:12:28.456493] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:68 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.466387] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e7c50 00:26:41.356 [2024-07-25 12:12:28.467524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:2052 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.467543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.475410] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e8d30 00:26:41.356 [2024-07-25 12:12:28.476555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:1904 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.476574] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.484547] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f4298 00:26:41.356 [2024-07-25 12:12:28.485790] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:1149 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.485809] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.493612] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190ec840 00:26:41.356 [2024-07-25 12:12:28.495491] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:25484 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.495510] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:70 cdw0:0 sqhd:0072 p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.504092] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.505379] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:23750 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.505398] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:71 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.514502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.515546] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:24522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.515565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:57 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.523930] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.524336] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:46 nsid:1 lba:6072 len:1 SGL 
DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.524355] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:46 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.533361] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.533560] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:15737 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.533579] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.542746] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.542942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:3363 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.542959] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.552144] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.552340] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:9815 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.552359] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.561526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.562049] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:11902 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.562068] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.570947] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.571143] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:21185 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.571160] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.580346] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.580544] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:15000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.580561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.589766] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.589962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 
nsid:1 lba:3808 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.589981] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.356 [2024-07-25 12:12:28.599229] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.356 [2024-07-25 12:12:28.599662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:69 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.356 [2024-07-25 12:12:28.599680] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.608776] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.609016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:14051 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.609035] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.618263] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.618919] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:126 nsid:1 lba:21522 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.618937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:126 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.627604] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.627801] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:110 nsid:1 lba:545 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.627820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:110 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.637012] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.637423] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:9458 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.637442] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:119 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.646426] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.646623] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:20818 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.646644] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.655775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.656067] nvme_qpair.c: 243:nvme_io_qpair_print_command: 
*NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:20462 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.656086] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.665188] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.665869] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:6080 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.665887] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:40 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.674561] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.674754] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:9945 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.674780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.683959] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.617 [2024-07-25 12:12:28.684162] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:9071 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.617 [2024-07-25 12:12:28.684180] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.617 [2024-07-25 12:12:28.693508] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.618 [2024-07-25 12:12:28.693705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:2555 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.693731] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.702893] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.618 [2024-07-25 12:12:28.703087] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:19544 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.703105] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.712343] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.618 [2024-07-25 12:12:28.712539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:22723 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.712558] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:74 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.721647] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f57b0 00:26:41.618 [2024-07-25 12:12:28.723789] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:13965 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.723807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:52 cdw0:0 sqhd:007b p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.732981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fda78 00:26:41.618 [2024-07-25 12:12:28.734174] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:41 nsid:1 lba:374 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.734193] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:41 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.742436] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e7818 00:26:41.618 [2024-07-25 12:12:28.742658] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:20339 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.742677] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:115 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.751793] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e7818 00:26:41.618 [2024-07-25 12:12:28.753584] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:24546 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.753603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:38 cdw0:0 sqhd:007c p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.762246] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e8088 00:26:41.618 [2024-07-25 12:12:28.763188] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:1516 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.763206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:65 cdw0:0 sqhd:0078 p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.771091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e84c0 00:26:41.618 [2024-07-25 12:12:28.771705] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:11020 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.771724] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:11 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.780141] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190eb760 00:26:41.618 [2024-07-25 12:12:28.780960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:94 nsid:1 lba:13706 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.780978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:94 cdw0:0 sqhd:005c p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.789108] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fc998 00:26:41.618 [2024-07-25 
12:12:28.790962] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:16862 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.790980] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:72 cdw0:0 sqhd:0053 p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.801165] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190fef90 00:26:41.618 [2024-07-25 12:12:28.802454] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:24766 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.802472] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:53 cdw0:0 sqhd:006e p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.810248] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f2510 00:26:41.618 [2024-07-25 12:12:28.810502] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:18138 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.810521] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.819644] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f2510 00:26:41.618 [2024-07-25 12:12:28.820161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22023 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.820179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.829097] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f2510 00:26:41.618 [2024-07-25 12:12:28.829326] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:25291 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.829344] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.838465] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f2510 00:26:41.618 [2024-07-25 12:12:28.840158] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:66 nsid:1 lba:1060 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.840177] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:66 cdw0:0 sqhd:007f p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.851450] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190f2948 00:26:41.618 [2024-07-25 12:12:28.852910] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:103 nsid:1 lba:17007 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.852929] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:103 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:41.618 [2024-07-25 12:12:28.861550] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 
00:26:41.618 [2024-07-25 12:12:28.861796] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:19779 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.618 [2024-07-25 12:12:28.861816] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.878 [2024-07-25 12:12:28.871065] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.878 [2024-07-25 12:12:28.871290] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13103 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.878 [2024-07-25 12:12:28.871310] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.878 [2024-07-25 12:12:28.880526] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.878 [2024-07-25 12:12:28.880751] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20784 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.878 [2024-07-25 12:12:28.880770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.889980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.890212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:17095 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.890232] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.899494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.899720] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:25224 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.899743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.909189] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.909413] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13823 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.909432] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.918824] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.919053] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10979 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.919071] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.928297] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) 
with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.928524] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14930 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.928543] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.937741] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.937974] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18958 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.937994] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.947315] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.947538] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:2721 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.947556] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.956775] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.956999] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24624 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.957017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.966206] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.966430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:2794 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.966449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.975681] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.975907] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:23296 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.975926] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.985145] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.985376] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:16108 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.985394] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:28.994600] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error 
on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:28.994822] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6647 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:28.994841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.004077] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.004302] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6111 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.004320] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.013448] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.013679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14712 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.013697] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.022883] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.023111] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8483 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.023128] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.032341] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.032558] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:5385 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.032576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.041767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.041992] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13293 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.042010] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.051234] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.051460] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17533 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.051479] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.060678] tcp.c:2113:data_crc32_calc_done: *ERROR*: 
Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.060902] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22012 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.060920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.070131] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.070355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:9256 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.070374] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.079616] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.079843] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:3907 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.079862] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.089063] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.089291] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:17420 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.089308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.098581] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.098803] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12957 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.098821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.107889] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.108113] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10759 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.108131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.879 [2024-07-25 12:12:29.117351] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.879 [2024-07-25 12:12:29.117573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14717 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.879 [2024-07-25 12:12:29.117592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:41.880 [2024-07-25 12:12:29.126955] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:41.880 [2024-07-25 12:12:29.127192] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14091 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:41.880 [2024-07-25 12:12:29.127212] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.136646] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.136874] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:20124 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.136893] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.146089] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.146316] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:23366 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.146338] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.155743] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.155969] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:24490 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.155988] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.165166] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.165389] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10582 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.165408] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.174613] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.174840] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:6655 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.174859] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.184087] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.184312] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21623 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.184331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 
[2024-07-25 12:12:29.193543] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.193767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21062 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.193784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.203235] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.203462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:18869 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.203481] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.212951] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.213186] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3896 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.213205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.222718] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.222951] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10267 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.222971] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.232981] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.233241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8000 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.233261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.243649] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.243901] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:24547 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.243920] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.254302] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.254550] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:17713 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.254569] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 
m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.264978] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.265241] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:11428 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.265261] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.274963] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.275206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21493 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.275225] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.284707] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.139 [2024-07-25 12:12:29.284945] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:16008 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.139 [2024-07-25 12:12:29.284963] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.139 [2024-07-25 12:12:29.294329] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.294552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22844 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.294570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.303757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.303978] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:22132 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.303996] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.313239] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.313462] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16177 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.313480] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.322662] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.322891] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:14654 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.322910] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 
cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.332091] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.332322] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:4386 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.332342] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.341552] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.341777] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:475 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.341796] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.350952] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.351183] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6963 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.351202] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.360374] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.360602] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:13967 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.360620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.369786] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.370006] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:10517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.370024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.140 [2024-07-25 12:12:29.379308] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.140 [2024-07-25 12:12:29.379535] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13595 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.140 [2024-07-25 12:12:29.379554] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.388757] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.388983] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:22441 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.389003] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT 
ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.398243] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.398471] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:22003 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.398492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.407685] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.407908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:1469 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.407927] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.417127] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.417356] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:24167 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.417375] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.426570] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.426794] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:4016 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.426813] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.435997] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.436229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20950 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.436249] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.445445] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.445666] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12845 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.445685] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.455050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.455273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:10719 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.455291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND 
TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.464488] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.464710] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:21461 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.464729] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.473892] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.474118] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20879 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.474137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.483322] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.483552] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:8684 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.483570] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.492781] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.493004] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:20372 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.493023] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.502217] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.400 [2024-07-25 12:12:29.502442] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:11284 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.400 [2024-07-25 12:12:29.502460] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.400 [2024-07-25 12:12:29.511652] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.511876] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:3946 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.511894] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.521126] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.521347] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:18161 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.521365] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.530549] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.530771] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:394 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.530790] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.539998] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.540229] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:23006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.540247] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.549456] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.549679] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:15006 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.549698] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.558885] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.559116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:12191 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.559135] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.568331] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.568556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:21936 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.568575] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.577748] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.577971] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:526 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.577990] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.587207] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.587430] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11607 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 
12:12:29.587449] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.596656] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.596880] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13517 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.596898] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.606095] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.606317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16360 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.606334] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.615669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.615889] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13043 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.615908] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.625110] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.625333] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2731 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.625352] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.634425] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.634650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5055 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.634668] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.401 [2024-07-25 12:12:29.643869] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.401 [2024-07-25 12:12:29.644096] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:13317 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.401 [2024-07-25 12:12:29.644124] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.661 [2024-07-25 12:12:29.653353] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.661 [2024-07-25 12:12:29.653573] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8777 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 
00:26:42.661 [2024-07-25 12:12:29.653592] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.661 [2024-07-25 12:12:29.662797] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.661 [2024-07-25 12:12:29.663024] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:11431 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.663048] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.672233] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.672458] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:19501 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.672477] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.681577] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.681800] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:966 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.681819] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.691035] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.691264] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:14142 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.691282] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.700446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.700670] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:20035 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.700687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.710001] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.710244] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13882 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.710263] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.719446] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.719669] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:4319 len:1 SGL DATA BLOCK 
OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.719687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.728856] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.729085] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:13425 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.729103] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.738293] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.738520] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:8552 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.738539] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.747688] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.747915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:24700 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.747933] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.757169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.757392] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:6895 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.757410] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.766610] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.766838] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:8686 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.766856] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.776050] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.776272] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2289 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.776291] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.785502] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.785725] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:22010 
len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.785743] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.794980] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.795212] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:2442 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.795231] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.804419] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.804646] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6961 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.804664] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.813838] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.814064] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:10918 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.814082] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.823320] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.823541] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:3112 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.823559] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.832767] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.832987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:16944 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.833006] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.842270] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.842484] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:6443 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.842502] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.851668] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.851895] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 
cid:84 nsid:1 lba:11585 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.851914] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.861125] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.861353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:5357 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.861371] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.870582] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.870819] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:34 nsid:1 lba:3666 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.870838] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:34 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.880183] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.880409] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:16439 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.880427] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:45 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.889576] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.889797] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:2668 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.662 [2024-07-25 12:12:29.889818] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:84 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.662 [2024-07-25 12:12:29.899047] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151a420) with pdu=0x2000190e99d8 00:26:42.662 [2024-07-25 12:12:29.899269] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:13468 len:1 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:26:42.663 [2024-07-25 12:12:29.899287] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:78 cdw0:0 sqhd:007e p:0 m:0 dnr:0 00:26:42.663 00:26:42.663 Latency(us) 00:26:42.663 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:42.663 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 128, IO size: 4096) 00:26:42.663 nvme0n1 : 2.00 26369.36 103.01 0.00 0.00 4845.60 2664.18 30089.57 00:26:42.663 =================================================================================================================== 00:26:42.663 Total : 26369.36 103.01 0.00 0.00 4845.60 2664.18 30089.57 00:26:42.663 0 00:26:42.947 12:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1 00:26:42.947 12:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc 
bdev_get_iostat -b nvme0n1 00:26:42.947 12:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0] 00:26:42.947 | .driver_specific 00:26:42.947 | .nvme_error 00:26:42.947 | .status_code 00:26:42.947 | .command_transient_transport_error' 00:26:42.947 12:12:29 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 207 > 0 )) 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 473853 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 473853 ']' 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 473853 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:42.947 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 473853 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 473853' 00:26:43.221 killing process with pid 473853 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 473853 00:26:43.221 Received shutdown signal, test time was about 2.000000 seconds 00:26:43.221 00:26:43.221 Latency(us) 00:26:43.221 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:43.221 =================================================================================================================== 00:26:43.221 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 473853 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@115 -- # run_bperf_err randwrite 131072 16 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@54 -- # local rw bs qd 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # rw=randwrite 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # bs=131072 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@56 -- # qd=16 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@58 -- # bperfpid=474383 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@60 -- # waitforlisten 474383 /var/tmp/bperf.sock 00:26:43.221 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@57 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -m 2 -r /var/tmp/bperf.sock -w randwrite -o 131072 -t 2 -q 16 -z 00:26:43.222 12:12:30 
nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@829 -- # '[' -z 474383 ']' 00:26:43.222 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:26:43.222 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:43.222 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:26:43.222 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:26:43.222 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:43.222 12:12:30 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:43.222 [2024-07-25 12:12:30.407332] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:26:43.222 [2024-07-25 12:12:30.407380] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid474383 ] 00:26:43.222 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:43.222 Zero copy mechanism will not be used. 00:26:43.222 EAL: No free 2048 kB hugepages reported on node 1 00:26:43.222 [2024-07-25 12:12:30.460491] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:43.481 [2024-07-25 12:12:30.541787] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:44.050 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:44.050 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@862 -- # return 0 00:26:44.050 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@61 -- # bperf_rpc bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.050 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@63 -- # rpc_cmd accel_error_inject_error -o crc32c -t disable 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@64 -- # bperf_rpc bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.309 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller --ddgst -t tcp -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0 00:26:44.568 nvme0n1 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@67 -- # rpc_cmd 
accel_error_inject_error -o crc32c -t corrupt -i 32 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@69 -- # bperf_py perform_tests 00:26:44.568 12:12:31 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@19 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:26:44.827 I/O size of 131072 is greater than zero copy threshold (65536). 00:26:44.827 Zero copy mechanism will not be used. 00:26:44.827 Running I/O for 2 seconds... 00:26:44.827 [2024-07-25 12:12:31.939449] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:31.940086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17568 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:31.940116] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:31.959985] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:31.960503] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:9024 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:31.960527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:31.982213] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:31.982939] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23840 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:31.982961] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:32.003964] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:32.004810] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13952 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:32.004830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:32.026107] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:32.026844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:23616 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:32.026864] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:32.049899] 
tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:32.050808] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:15168 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:32.050828] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:44.827 [2024-07-25 12:12:32.073805] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:44.827 [2024-07-25 12:12:32.074672] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:3808 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:44.827 [2024-07-25 12:12:32.074692] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.098498] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.099282] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:672 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.099301] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.122251] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.122908] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17696 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.122928] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.144134] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.144982] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:21440 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.145002] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.166955] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.167608] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4192 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.167629] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.191494] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.191737] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17856 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.191757] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 
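[editor's note] For reference, the xtrace above shows how this digest_error subtest produces the failures that follow: bdevperf is pointed at /var/tmp/bperf.sock, NVMe error statistics and unlimited bdev retries are enabled, the controller is attached over TCP with data digest (--ddgst), CRC-32C corruption is injected on the target side, and after perform_tests the transient-error counter is read back through bdev_get_iostat. Below is a minimal shell sketch of that sequence, reconstructed only from commands visible in this trace; the paths, the 10.0.0.2 target address, the NQN and the 131072-byte/qd16 workload flags are values from this particular run, and the sketch assumes the nvmf target configured earlier in this log is already listening.

    # Sketch reconstructed from the xtrace in this log; all flags are copied
    # from this run, so treat the concrete values as examples only.
    SPDK=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk
    BPERF=/var/tmp/bperf.sock

    # Initiator side: start bdevperf waiting for RPC configuration (-z).
    # The test harness (waitforlisten in the trace) waits for $BPERF to
    # appear before issuing any RPCs.
    "$SPDK/build/examples/bdevperf" -m 2 -r "$BPERF" -w randwrite -o 131072 -t 2 -q 16 -z &

    # Keep per-controller NVMe error statistics and retry failed I/O in the
    # bdev layer, so digest failures show up as transient-error counters
    # instead of failing the job.
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_set_options --nvme-error-stat --bdev-retry-count -1

    # Attach the NVMe/TCP controller with data digest (DDGST) enabled.
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_nvme_attach_controller --ddgst -t tcp \
        -a 10.0.0.2 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode1 -b nvme0

    # Target side (rpc_cmd in the trace, i.e. the nvmf target's own RPC
    # socket): corrupt CRC-32C results so the host logs "Data digest error"
    # and sees TRANSIENT TRANSPORT ERROR completions.
    "$SPDK/scripts/rpc.py" accel_error_inject_error -o crc32c -t corrupt -i 32

    # Run the workload, then read back the transient-transport-error counter.
    "$SPDK/examples/bdev/bdevperf/bdevperf.py" -s "$BPERF" perform_tests
    "$SPDK/scripts/rpc.py" -s "$BPERF" bdev_get_iostat -b nvme0n1 \
        | jq -r '.bdevs[0] | .driver_specific | .nvme_error | .status_code | .command_transient_transport_error'

In the trace, that same counter feeds the digest.sh check (the earlier "(( 207 > 0 ))" at host/digest.sh@71), which treats a non-zero transient-error count as a passing subtest.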
00:26:45.086 [2024-07-25 12:12:32.212821] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.213568] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:10752 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.213588] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.235151] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.235881] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:20160 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.086 [2024-07-25 12:12:32.235900] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.086 [2024-07-25 12:12:32.258732] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.086 [2024-07-25 12:12:32.259582] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:16352 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.087 [2024-07-25 12:12:32.259602] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0 00:26:45.087 [2024-07-25 12:12:32.282309] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.087 [2024-07-25 12:12:32.283139] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:11520 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.087 [2024-07-25 12:12:32.283158] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0 00:26:45.087 [2024-07-25 12:12:32.302669] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.087 [2024-07-25 12:12:32.303224] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:13248 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.087 [2024-07-25 12:12:32.303243] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0021 p:0 m:0 dnr:0 00:26:45.087 [2024-07-25 12:12:32.323250] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.087 [2024-07-25 12:12:32.324055] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:4384 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.087 [2024-07-25 12:12:32.324075] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0041 p:0 m:0 dnr:0 00:26:45.346 [2024-07-25 12:12:32.346497] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90 00:26:45.346 [2024-07-25 12:12:32.347321] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:19712 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:26:45.346 [2024-07-25 12:12:32.347341] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) 
qid:1 cid:15 cdw0:0 sqhd:0061 p:0 m:0 dnr:0
00:26:45.346 [2024-07-25 12:12:32.370169] tcp.c:2113:data_crc32_calc_done: *ERROR*: Data digest error on tqpair=(0x151c0a0) with pdu=0x2000190fef90
00:26:45.346 [2024-07-25 12:12:32.370870] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:17408 len:32 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0
00:26:45.346 [2024-07-25 12:12:32.370890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMMAND TRANSIENT TRANSPORT ERROR (00/22) qid:1 cid:15 cdw0:0 sqhd:0001 p:0 m:0 dnr:0
[... the same data_crc32_calc_done *ERROR* / WRITE *NOTICE* / COMMAND TRANSIENT TRANSPORT ERROR (00/22) triplet repeats for the remaining WRITEs of this job, timestamps 12:12:32.393745 through 12:12:33.890940, with varying lba and sqhd values ...]
00:26:46.899
00:26:46.899 Latency(us)
00:26:46.899 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:26:46.899 Job: nvme0n1 (Core Mask 0x2, workload: randwrite, depth: 16, IO size: 131072)
00:26:46.899 nvme0n1 : 2.01 1309.67 163.71 0.00 0.00 12186.12 8719.14 36016.31
00:26:46.899 ===================================================================================================================
00:26:46.899 Total : 1309.67 163.71 0.00 0.00 12186.12 8719.14 36016.31
00:26:46.899 0
00:26:46.899 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # get_transient_errcount nvme0n1
00:26:46.899 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@27 -- # bperf_rpc bdev_get_iostat -b nvme0n1
00:26:46.899 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@28 -- # jq -r '.bdevs[0]
00:26:46.899 | .driver_specific
00:26:46.899 | .nvme_error
00:26:46.899 | .status_code
00:26:46.899 | .command_transient_transport_error'
00:26:46.899 12:12:33 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@18 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_get_iostat -b nvme0n1
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@71 -- # (( 84 > 0 ))
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@73 -- # killprocess 474383
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 474383 ']'
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 474383
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname
00:26:46.899 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error --
common/autotest_common.sh@954 -- # ps --no-headers -o comm= 474383 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 474383' 00:26:47.159 killing process with pid 474383 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 474383 00:26:47.159 Received shutdown signal, test time was about 2.000000 seconds 00:26:47.159 00:26:47.159 Latency(us) 00:26:47.159 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:47.159 =================================================================================================================== 00:26:47.159 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 474383 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- host/digest.sh@116 -- # killprocess 472273 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@948 -- # '[' -z 472273 ']' 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@952 -- # kill -0 472273 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # uname 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 472273 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@966 -- # echo 'killing process with pid 472273' 00:26:47.159 killing process with pid 472273 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@967 -- # kill 472273 00:26:47.159 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@972 -- # wait 472273 00:26:47.419 00:26:47.419 real 0m17.036s 00:26:47.419 user 0m33.801s 00:26:47.419 sys 0m3.434s 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest.nvmf_digest_error -- common/autotest_common.sh@10 -- # set +x 00:26:47.419 ************************************ 00:26:47.419 END TEST nvmf_digest_error 00:26:47.419 ************************************ 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1142 -- # return 0 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@149 -- # trap - SIGINT SIGTERM EXIT 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- host/digest.sh@150 -- # nvmftestfini 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@488 -- # nvmfcleanup 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@117 -- # 
sync 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@120 -- # set +e 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@121 -- # for i in {1..20} 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:26:47.419 rmmod nvme_tcp 00:26:47.419 rmmod nvme_fabrics 00:26:47.419 rmmod nvme_keyring 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@124 -- # set -e 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@125 -- # return 0 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@489 -- # '[' -n 472273 ']' 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@490 -- # killprocess 472273 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@948 -- # '[' -z 472273 ']' 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@952 -- # kill -0 472273 00:26:47.419 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (472273) - No such process 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@975 -- # echo 'Process with pid 472273 is not found' 00:26:47.419 Process with pid 472273 is not found 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@278 -- # remove_spdk_ns 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:47.419 12:12:34 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:26:49.958 00:26:49.958 real 0m42.118s 00:26:49.958 user 1m9.774s 00:26:49.958 sys 0m10.953s 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_digest -- common/autotest_common.sh@10 -- # set +x 00:26:49.958 ************************************ 00:26:49.958 END TEST nvmf_digest 00:26:49.958 ************************************ 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@36 -- # [[ 0 -eq 1 ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@41 -- # [[ 0 -eq 1 ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@46 -- # [[ phy == phy ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@47 -- # run_test nvmf_bdevperf /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:26:49.958 
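The get_transient_errcount step traced above is a single JSON-RPC round trip: bdevperf serves its RPC socket at /var/tmp/bperf.sock, and the digest-error test reads the bdev's nvme_error counters to confirm that the forced data digest errors surfaced as COMMAND TRANSIENT TRANSPORT ERROR completions. A minimal stand-alone sketch of that query, reusing the rpc.py path, socket, bdev name and jq filter from the trace (MIN_ERRORS is a hypothetical threshold added for illustration, not a value from host/digest.sh):

  # Count the transient transport errors recorded for bdevperf's nvme0n1 bdev.
  # rpc.py path, socket and jq filter are copied from the trace above.
  MIN_ERRORS=1   # hypothetical threshold; the harness only requires > 0
  count=$(/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock \
          bdev_get_iostat -b nvme0n1 |
          jq -r '.bdevs[0].driver_specific.nvme_error.status_code.command_transient_transport_error')
  if (( count >= MIN_ERRORS )); then
          echo "data digest errors were reported as transient transport errors: $count"
  else
          echo "expected at least $MIN_ERRORS transient transport errors, got $count" >&2
          exit 1
  fi

In this run the counter came back as 84, which is what the (( 84 > 0 )) check above reflects.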
12:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:26:49.958 ************************************ 00:26:49.958 START TEST nvmf_bdevperf 00:26:49.958 ************************************ 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh --transport=tcp 00:26:49.958 * Looking for test storage... 00:26:49.958 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # uname -s 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@5 -- # export PATH 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@47 -- # : 0 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:26:49.958 12:12:36 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@51 -- # have_pci_nics=0 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@11 -- # MALLOC_BDEV_SIZE=64 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@12 -- # MALLOC_BLOCK_SIZE=512 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@24 -- # nvmftestinit 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@448 -- # prepare_net_devs 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@410 -- # local -g is_hw=no 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@412 -- # remove_spdk_ns 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@285 -- # xtrace_disable 00:26:49.958 12:12:36 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # pci_devs=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@291 -- # local -a pci_devs 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # pci_net_devs=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # pci_drivers=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@293 -- # local -A pci_drivers 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # net_devs=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@295 -- # local -ga net_devs 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # e810=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@296 -- # local -ga e810 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # x722=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@297 -- # local -ga x722 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # mlx=() 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@298 -- # local -ga mlx 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:26:55.234 12:12:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:26:55.234 Found 0000:86:00.0 (0x8086 - 0x159b) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:26:55.234 Found 0000:86:00.1 (0x8086 - 0x159b) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:26:55.234 12:12:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:26:55.234 Found net devices under 0000:86:00.0: cvl_0_0 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@390 -- # [[ up == up ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:26:55.234 Found net devices under 0000:86:00.1: cvl_0_1 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@414 -- # is_hw=yes 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:26:55.234 12:12:41 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:26:55.234 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:26:55.234 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.219 ms 00:26:55.234 00:26:55.234 --- 10.0.0.2 ping statistics --- 00:26:55.234 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.234 rtt min/avg/max/mdev = 0.219/0.219/0.219/0.000 ms 00:26:55.234 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:26:55.234 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:26:55.234 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.280 ms 00:26:55.235 00:26:55.235 --- 10.0.0.1 ping statistics --- 00:26:55.235 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:26:55.235 rtt min/avg/max/mdev = 0.280/0.280/0.280/0.000 ms 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@422 -- # return 0 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@25 -- # tgt_init 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=478447 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 478447 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 478447 ']' 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:26:55.235 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:26:55.235 12:12:41 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.235 [2024-07-25 12:12:41.933454] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
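For reference, the nvmf_tcp_init sequence traced above splits the two e810 ports across namespaces: the target port cvl_0_0 moves into cvl_0_0_ns_spdk with address 10.0.0.2, the initiator keeps cvl_0_1 with 10.0.0.1 in the default namespace, TCP port 4420 is opened in iptables, and both directions are ping-checked. A condensed root-shell sketch of that plumbing, using only the interface names, addresses and namespace name from this run:

  # Namespace plumbing performed by nvmf_tcp_init (condensed from the trace above).
  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk                      # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                            # initiator side, default namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT   # let NVMe/TCP traffic in
  ping -c 1 10.0.0.2                                             # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1               # target -> initiator

The two pings are the reachability checks whose replies (0.219 ms and 0.280 ms) appear in the log above.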
00:26:55.235 [2024-07-25 12:12:41.933499] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:26:55.235 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.235 [2024-07-25 12:12:41.994664] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:26:55.235 [2024-07-25 12:12:42.069255] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:26:55.235 [2024-07-25 12:12:42.069300] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:26:55.235 [2024-07-25 12:12:42.069308] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:26:55.235 [2024-07-25 12:12:42.069314] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:26:55.235 [2024-07-25 12:12:42.069319] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:26:55.235 [2024-07-25 12:12:42.069420] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:26:55.235 [2024-07-25 12:12:42.069509] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:26:55.235 [2024-07-25 12:12:42.069511] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 [2024-07-25 12:12:42.790001] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 Malloc0 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 12:12:42 
nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:26:55.805 [2024-07-25 12:12:42.855541] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/62 -q 128 -o 4096 -w verify -t 1 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@27 -- # gen_nvmf_target_json 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:55.805 { 00:26:55.805 "params": { 00:26:55.805 "name": "Nvme$subsystem", 00:26:55.805 "trtype": "$TEST_TRANSPORT", 00:26:55.805 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:55.805 "adrfam": "ipv4", 00:26:55.805 "trsvcid": "$NVMF_PORT", 00:26:55.805 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:55.805 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:55.805 "hdgst": ${hdgst:-false}, 00:26:55.805 "ddgst": ${ddgst:-false} 00:26:55.805 }, 00:26:55.805 "method": "bdev_nvme_attach_controller" 00:26:55.805 } 00:26:55.805 EOF 00:26:55.805 )") 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:55.805 12:12:42 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:55.805 "params": { 00:26:55.805 "name": "Nvme1", 00:26:55.805 "trtype": "tcp", 00:26:55.805 "traddr": "10.0.0.2", 00:26:55.805 "adrfam": "ipv4", 00:26:55.805 "trsvcid": "4420", 00:26:55.805 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:55.805 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:55.805 "hdgst": false, 00:26:55.805 "ddgst": false 00:26:55.805 }, 00:26:55.805 "method": "bdev_nvme_attach_controller" 00:26:55.805 }' 00:26:55.805 [2024-07-25 12:12:42.907206] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
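tgt_init then provisions the freshly started target entirely over its RPC socket: nvmf_create_transport brings up the TCP transport (the "*** TCP Transport Init ***" notice above), bdev_malloc_create creates a 64 MiB, 512-byte-block RAM bdev named Malloc0, nvmf_create_subsystem registers nqn.2016-06.io.spdk:cnode1 with any-host access (-a) and serial SPDK00000000000001, nvmf_subsystem_add_ns attaches Malloc0 as a namespace, and nvmf_subsystem_add_listener opens the 10.0.0.2:4420 listener announced by nvmf_tcp_listen. rpc_cmd is the test framework's wrapper that ultimately drives scripts/rpc.py, so the equivalent hand-run sequence (a sketch, assuming the default /var/tmp/spdk.sock socket and the SPDK tree as working directory) is:

RPC=./scripts/rpc.py                                  # same RPCs the rpc_cmd wrapper issues above
$RPC nvmf_create_transport -t tcp -o -u 8192
$RPC bdev_malloc_create 64 512 -b Malloc0             # 64 MiB malloc bdev, 512 B blocks
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420

Because /var/tmp/spdk.sock is a UNIX-domain socket on the shared filesystem, these calls work from the root namespace even though nvmf_tgt itself is running inside cvl_0_0_ns_spdk.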
00:26:55.805 [2024-07-25 12:12:42.907250] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478596 ] 00:26:55.805 EAL: No free 2048 kB hugepages reported on node 1 00:26:55.805 [2024-07-25 12:12:42.960825] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:55.805 [2024-07-25 12:12:43.035341] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:56.065 Running I/O for 1 seconds... 00:26:57.002 00:26:57.002 Latency(us) 00:26:57.002 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:26:57.002 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:26:57.002 Verification LBA range: start 0x0 length 0x4000 00:26:57.002 Nvme1n1 : 1.01 10146.33 39.63 0.00 0.00 12567.90 2664.18 23023.08 00:26:57.002 =================================================================================================================== 00:26:57.002 Total : 10146.33 39.63 0.00 0.00 12567.90 2664.18 23023.08 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@30 -- # bdevperfpid=478833 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@32 -- # sleep 3 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf --json /dev/fd/63 -q 128 -o 4096 -w verify -t 15 -f 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@29 -- # gen_nvmf_target_json 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # config=() 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@532 -- # local subsystem config 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:26:57.262 { 00:26:57.262 "params": { 00:26:57.262 "name": "Nvme$subsystem", 00:26:57.262 "trtype": "$TEST_TRANSPORT", 00:26:57.262 "traddr": "$NVMF_FIRST_TARGET_IP", 00:26:57.262 "adrfam": "ipv4", 00:26:57.262 "trsvcid": "$NVMF_PORT", 00:26:57.262 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:26:57.262 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:26:57.262 "hdgst": ${hdgst:-false}, 00:26:57.262 "ddgst": ${ddgst:-false} 00:26:57.262 }, 00:26:57.262 "method": "bdev_nvme_attach_controller" 00:26:57.262 } 00:26:57.262 EOF 00:26:57.262 )") 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@554 -- # cat 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@556 -- # jq . 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@557 -- # IFS=, 00:26:57.262 12:12:44 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:26:57.262 "params": { 00:26:57.262 "name": "Nvme1", 00:26:57.262 "trtype": "tcp", 00:26:57.262 "traddr": "10.0.0.2", 00:26:57.262 "adrfam": "ipv4", 00:26:57.262 "trsvcid": "4420", 00:26:57.262 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:26:57.262 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:26:57.262 "hdgst": false, 00:26:57.262 "ddgst": false 00:26:57.262 }, 00:26:57.262 "method": "bdev_nvme_attach_controller" 00:26:57.262 }' 00:26:57.262 [2024-07-25 12:12:44.474684] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
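bdevperf never talks to the target's RPC socket; each run is handed a JSON bdev configuration on --json /dev/fd/62 (or /dev/fd/63 for the second run), which gen_nvmf_target_json assembles from the fragment printed above. The "params"/"method" object is verbatim in the log; the envelope around it lives in nvmf/common.sh, so the wrapper shown below is only an assumed illustration of its usual "bdev" subsystem shape:

# Sketch of the config handed to bdevperf. The inner object is verbatim from the log;
# the surrounding "subsystems"/"bdev" envelope and the file path are assumptions.
cat > /tmp/bdevperf_nvme.json <<'JSON'
{
  "subsystems": [
    {
      "subsystem": "bdev",
      "config": [
        {
          "params": {
            "name": "Nvme1",
            "trtype": "tcp",
            "traddr": "10.0.0.2",
            "adrfam": "ipv4",
            "trsvcid": "4420",
            "subnqn": "nqn.2016-06.io.spdk:cnode1",
            "hostnqn": "nqn.2016-06.io.spdk:host1",
            "hdgst": false,
            "ddgst": false
          },
          "method": "bdev_nvme_attach_controller"
        }
      ]
    }
  ]
}
JSON
./build/examples/bdevperf --json /tmp/bdevperf_nvme.json -q 128 -o 4096 -w verify -t 1

The 1-second verify pass is the sanity check: 10146.33 IOPS at 4 KiB works out to 10146.33 * 4096 B, roughly 39.6 MiB/s, matching the MiB/s column, and at queue depth 128 that rate implies 128 / 10146.33 s, about 12.6 ms of average latency, which lines up with the reported 12567.90 us. The second invocation is the same workload stretched to 15 seconds (-t 15 -f) and launched in the background, with its pid captured as bdevperfpid=478833, so that the target can be taken down underneath it in the next step.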
00:26:57.262 [2024-07-25 12:12:44.474738] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid478833 ] 00:26:57.262 EAL: No free 2048 kB hugepages reported on node 1 00:26:57.523 [2024-07-25 12:12:44.528252] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:57.523 [2024-07-25 12:12:44.598142] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:26:57.783 Running I/O for 15 seconds... 00:27:00.326 12:12:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@33 -- # kill -9 478447 00:27:00.326 12:12:47 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@35 -- # sleep 3 00:27:00.326 [2024-07-25 12:12:47.454072] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:31 nsid:1 lba:105064 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454137] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454155] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:41 nsid:1 lba:105072 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454177] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:96 nsid:1 lba:105080 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454186] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454195] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:18 nsid:1 lba:105088 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454205] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454214] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:6 nsid:1 lba:105096 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454223] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454234] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:94 nsid:1 lba:105104 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454242] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:69 nsid:1 lba:105112 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454262] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454273] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:86 nsid:1 lba:105120 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454289] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - 
SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454299] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:90 nsid:1 lba:105128 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454308] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454323] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:51 nsid:1 lba:105136 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454331] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454342] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:102 nsid:1 lba:105144 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454349] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454359] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:66 nsid:1 lba:105152 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454368] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454378] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:47 nsid:1 lba:105160 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454388] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:34 nsid:1 lba:105168 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454409] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454422] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:44 nsid:1 lba:105176 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454434] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454444] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:113 nsid:1 lba:105184 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454465] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:4 nsid:1 lba:105192 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454473] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454490] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:101 nsid:1 lba:105200 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454497] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 
cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454507] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:122 nsid:1 lba:105208 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454515] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454523] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:89 nsid:1 lba:105216 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454539] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:62 nsid:1 lba:105224 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454548] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:88 nsid:1 lba:105232 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454565] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454574] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:16 nsid:1 lba:105240 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454582] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454593] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:9 nsid:1 lba:105248 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454603] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454616] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:73 nsid:1 lba:105256 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454628] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454641] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:75 nsid:1 lba:105264 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454662] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:79 nsid:1 lba:105272 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454671] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454684] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:24 nsid:1 lba:105280 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454694] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 
00:27:00.326 [2024-07-25 12:12:47.454706] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:42 nsid:1 lba:105288 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454714] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454727] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:110 nsid:1 lba:105296 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454734] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454748] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:76 nsid:1 lba:105304 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454770] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:46 nsid:1 lba:105312 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.326 [2024-07-25 12:12:47.454780] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454792] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:97 nsid:1 lba:105504 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.326 [2024-07-25 12:12:47.454803] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454820] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:13 nsid:1 lba:105512 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.326 [2024-07-25 12:12:47.454830] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454844] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:22 nsid:1 lba:105520 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.326 [2024-07-25 12:12:47.454854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:54 nsid:1 lba:105528 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.326 [2024-07-25 12:12:47.454874] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.326 [2024-07-25 12:12:47.454883] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:11 nsid:1 lba:105536 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.326 [2024-07-25 12:12:47.454890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454900] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:29 nsid:1 lba:105544 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454907] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 
12:12:47.454915] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:57 nsid:1 lba:105552 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454922] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454930] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:124 nsid:1 lba:105560 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454937] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454946] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:14 nsid:1 lba:105568 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454952] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454960] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:0 nsid:1 lba:105576 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454967] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454976] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:17 nsid:1 lba:105584 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454982] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.454991] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:20 nsid:1 lba:105592 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.454998] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455007] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:39 nsid:1 lba:105600 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455013] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455022] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:26 nsid:1 lba:105608 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455028] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455037] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:74 nsid:1 lba:105616 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455052] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455063] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:52 nsid:1 lba:105624 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455069] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455077] nvme_qpair.c: 
243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:117 nsid:1 lba:105632 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455084] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455093] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:43 nsid:1 lba:105640 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455108] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:77 nsid:1 lba:105648 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455115] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455124] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:35 nsid:1 lba:105656 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455131] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:15 nsid:1 lba:105664 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455146] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455154] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:60 nsid:1 lba:105672 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455161] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455169] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:12 nsid:1 lba:105680 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455176] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:107 nsid:1 lba:105688 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455192] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455200] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:98 nsid:1 lba:105696 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455206] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455215] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:38 nsid:1 lba:105704 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455222] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455230] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: 
WRITE sqid:1 cid:36 nsid:1 lba:105712 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455237] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:121 nsid:1 lba:105720 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455254] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455263] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:87 nsid:1 lba:105728 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.327 [2024-07-25 12:12:47.455269] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455277] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:125 nsid:1 lba:105320 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455292] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:85 nsid:1 lba:105328 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455300] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455308] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:49 nsid:1 lba:105336 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455316] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455324] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:30 nsid:1 lba:105344 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455330] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455339] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:126 nsid:1 lba:105352 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455347] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455355] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:28 nsid:1 lba:105360 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455362] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455370] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:120 nsid:1 lba:105368 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455377] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455385] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:1 nsid:1 
lba:105376 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455392] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455400] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:100 nsid:1 lba:105384 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455407] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455416] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:91 nsid:1 lba:105392 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455422] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455431] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:81 nsid:1 lba:105400 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455437] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455447] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:55 nsid:1 lba:105408 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455463] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:68 nsid:1 lba:105416 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455479] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:59 nsid:1 lba:105424 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455486] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455494] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:116 nsid:1 lba:105432 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.327 [2024-07-25 12:12:47.455501] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.327 [2024-07-25 12:12:47.455510] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:106 nsid:1 lba:105440 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.328 [2024-07-25 12:12:47.455517] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455525] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:84 nsid:1 lba:105736 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455531] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455540] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:109 nsid:1 lba:105744 len:8 SGL DATA 
BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455547] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455556] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:2 nsid:1 lba:105752 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455563] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455571] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:118 nsid:1 lba:105760 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455578] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455589] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:48 nsid:1 lba:105768 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455595] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455603] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:27 nsid:1 lba:105776 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455611] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455619] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:114 nsid:1 lba:105784 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455626] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455634] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:70 nsid:1 lba:105792 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455642] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455650] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:53 nsid:1 lba:105800 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455656] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455664] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:64 nsid:1 lba:105808 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455672] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455680] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:71 nsid:1 lba:105816 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455687] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455695] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:61 nsid:1 lba:105824 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 
[2024-07-25 12:12:47.455701] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455709] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:108 nsid:1 lba:105832 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455715] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455724] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:65 nsid:1 lba:105840 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455730] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455738] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:32 nsid:1 lba:105848 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455745] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455753] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:50 nsid:1 lba:105856 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455767] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:93 nsid:1 lba:105864 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455774] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455783] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:99 nsid:1 lba:105872 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455789] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455799] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:104 nsid:1 lba:105880 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455806] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455814] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:115 nsid:1 lba:105888 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455821] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455834] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:25 nsid:1 lba:105896 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455841] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455850] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:21 nsid:1 lba:105904 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455857] 
nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455865] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:8 nsid:1 lba:105912 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455873] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455882] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:58 nsid:1 lba:105920 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455888] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455897] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:67 nsid:1 lba:105928 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455904] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455912] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:40 nsid:1 lba:105936 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455918] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455927] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:82 nsid:1 lba:105944 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455934] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455942] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:10 nsid:1 lba:105952 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455949] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455957] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:45 nsid:1 lba:105960 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455964] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455972] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:83 nsid:1 lba:105968 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455978] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.455987] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:78 nsid:1 lba:105976 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.455993] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456002] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:123 nsid:1 lba:105984 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456008] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456016] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:80 nsid:1 lba:105992 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456024] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456032] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:19 nsid:1 lba:106000 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456039] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456054] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:7 nsid:1 lba:106008 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456061] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456070] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:23 nsid:1 lba:106016 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456077] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456086] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:105 nsid:1 lba:106024 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456093] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456101] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:56 nsid:1 lba:106032 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.328 [2024-07-25 12:12:47.456108] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.328 [2024-07-25 12:12:47.456116] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:72 nsid:1 lba:106040 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456123] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456131] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:5 nsid:1 lba:106048 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456138] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456146] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:119 nsid:1 lba:106056 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456152] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456161] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:3 nsid:1 lba:106064 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456168] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: 
ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456176] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:92 nsid:1 lba:106072 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456183] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456191] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: WRITE sqid:1 cid:112 nsid:1 lba:106080 len:8 SGL DATA BLOCK OFFSET 0x0 len:0x1000 00:27:00.329 [2024-07-25 12:12:47.456197] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456206] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:63 nsid:1 lba:105448 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456214] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456222] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:111 nsid:1 lba:105456 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456230] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456239] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:33 nsid:1 lba:105464 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456245] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456253] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:95 nsid:1 lba:105472 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456260] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456268] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:103 nsid:1 lba:105480 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456275] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456284] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:37 nsid:1 lba:105488 len:8 SGL TRANSPORT DATA BLOCK TRANSPORT 0x0 00:27:00.329 [2024-07-25 12:12:47.456290] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456298] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x1165ee0 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.456306] nvme_qpair.c: 579:nvme_qpair_abort_queued_reqs: *ERROR*: aborting queued i/o 00:27:00.329 [2024-07-25 12:12:47.456311] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:27:00.329 [2024-07-25 12:12:47.456317] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: READ sqid:1 cid:0 nsid:1 lba:105496 len:8 PRP1 0x0 PRP2 0x0 00:27:00.329 [2024-07-25 12:12:47.456325] nvme_qpair.c: 
474:spdk_nvme_print_completion: *NOTICE*: ABORTED - SQ DELETION (00/08) qid:1 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:27:00.329 [2024-07-25 12:12:47.456368] bdev_nvme.c:1612:bdev_nvme_disconnected_qpair_cb: *NOTICE*: qpair 0x1165ee0 was disconnected and freed. reset controller. 00:27:00.329 [2024-07-25 12:12:47.459219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.459271] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.460132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.460176] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.460198] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.460658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.460838] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.460846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.460854] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.463702] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.329 [2024-07-25 12:12:47.472590] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.473205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.473251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.473282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.473862] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.474100] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.474110] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.474118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.476906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
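Everything from "kill -9 478447" onward is the host-side fallout of deliberately killing the target: the TCP connection drops, bdev_nvme tears down qpair 0x1165ee0 ("disconnected and freed") and manually completes every command still outstanding on it with ABORTED - SQ DELETION (00/08). The aborted LBAs run from 105064 to 106080 in strides of 8 blocks (8 x 512 B = the 4 KiB I/O size), consistent with one entry per outstanding command at queue depth 128; READ entries carry transport SGL descriptors while WRITE entries use offset-based SGLs for their in-capsule 0x1000-byte payloads. Rather than reading such a storm line by line, it is usually enough to count aborted commands per opcode, for example with a throwaway filter like this (the log file name is a placeholder):

# Sketch: condense an abort storm like the one above into per-opcode counts.
grep -oE '(READ|WRITE) sqid:1 cid:[0-9]+ nsid:1 lba:[0-9]+' nvmf_bdevperf_console.log \
  | awk '{print $1}' | sort | uniq -c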
00:27:00.329 [2024-07-25 12:12:47.485682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.486344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.486389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.486412] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.486992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.487526] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.487536] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.487542] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.490226] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.329 [2024-07-25 12:12:47.498701] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.499407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.499452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.499474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.499899] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.500086] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.500096] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.500103] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.502845] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
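
The recurring "connect() failed, errno = 111" lines are POSIX ECONNREFUSED: each reset attempt re-dials 10.0.0.2:4420 while no NVMe/TCP listener is accepting there, so both the reconnect and the subsequent controller re-initialization fail. A minimal POSIX sketch that reproduces the same errno by dialing a port with no listener (the loopback address and port below are illustrative placeholders, not taken from this run):

/* Minimal POSIX reproduction of the "connect() failed, errno = 111" lines:
 * dialing a TCP port with no listener yields ECONNREFUSED (111 on Linux).
 * The address and port below are placeholders for illustration only. */
#include <arpa/inet.h>
#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

int main(void)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        perror("socket");
        return 1;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(4420) };
    inet_pton(AF_INET, "127.0.0.1", &sa.sin_addr);  /* assume nothing listens here */

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        /* On Linux this prints: connect() failed, errno = 111 (Connection refused) */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n", errno, strerror(errno));
    }
    close(fd);
    return 0;
}

On Linux this prints errno 111 ("Connection refused"), the same value posix_sock_create reports in the entries above.
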
00:27:00.329 [2024-07-25 12:12:47.511817] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.512395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.512412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.512420] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.512598] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.512777] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.512790] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.512797] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.515633] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.329 [2024-07-25 12:12:47.525017] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.525741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.525759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.525766] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.329 [2024-07-25 12:12:47.525950] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.329 [2024-07-25 12:12:47.526140] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.329 [2024-07-25 12:12:47.526150] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.329 [2024-07-25 12:12:47.526159] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.329 [2024-07-25 12:12:47.529055] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.329 [2024-07-25 12:12:47.538184] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.329 [2024-07-25 12:12:47.538853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.329 [2024-07-25 12:12:47.538870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.329 [2024-07-25 12:12:47.538878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.330 [2024-07-25 12:12:47.539068] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.330 [2024-07-25 12:12:47.539252] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.330 [2024-07-25 12:12:47.539262] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.330 [2024-07-25 12:12:47.539270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.330 [2024-07-25 12:12:47.542288] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.330 [2024-07-25 12:12:47.551664] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.330 [2024-07-25 12:12:47.552396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.330 [2024-07-25 12:12:47.552414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.330 [2024-07-25 12:12:47.552422] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.330 [2024-07-25 12:12:47.552618] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.330 [2024-07-25 12:12:47.552815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.330 [2024-07-25 12:12:47.552825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.330 [2024-07-25 12:12:47.552832] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.330 [2024-07-25 12:12:47.555943] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.330 [2024-07-25 12:12:47.565142] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.330 [2024-07-25 12:12:47.565868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.330 [2024-07-25 12:12:47.565886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.330 [2024-07-25 12:12:47.565894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.330 [2024-07-25 12:12:47.566094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.330 [2024-07-25 12:12:47.566290] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.330 [2024-07-25 12:12:47.566301] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.330 [2024-07-25 12:12:47.566309] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.330 [2024-07-25 12:12:47.569423] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.591 [2024-07-25 12:12:47.578580] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.591 [2024-07-25 12:12:47.579307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.591 [2024-07-25 12:12:47.579325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.591 [2024-07-25 12:12:47.579333] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.591 [2024-07-25 12:12:47.579528] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.591 [2024-07-25 12:12:47.579724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.591 [2024-07-25 12:12:47.579734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.591 [2024-07-25 12:12:47.579741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.591 [2024-07-25 12:12:47.582865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.591 [2024-07-25 12:12:47.592060] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.591 [2024-07-25 12:12:47.592801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.591 [2024-07-25 12:12:47.592818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.591 [2024-07-25 12:12:47.592825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.591 [2024-07-25 12:12:47.593007] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.591 [2024-07-25 12:12:47.593196] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.591 [2024-07-25 12:12:47.593206] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.591 [2024-07-25 12:12:47.593213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.591 [2024-07-25 12:12:47.596242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.591 [2024-07-25 12:12:47.605432] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.591 [2024-07-25 12:12:47.606127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.591 [2024-07-25 12:12:47.606145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.591 [2024-07-25 12:12:47.606152] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.606335] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.606514] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.606523] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.606530] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.609482] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.592 [2024-07-25 12:12:47.618621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.619334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.619352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.619359] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.619541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.619724] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.619734] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.619741] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.622675] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.592 [2024-07-25 12:12:47.631795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.632504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.632520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.632527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.632704] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.632882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.632892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.632899] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.635830] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.592 [2024-07-25 12:12:47.644991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.645652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.645670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.645677] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.645860] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.646050] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.646059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.646069] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.648986] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.592 [2024-07-25 12:12:47.658293] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.659006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.659023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.659030] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.659218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.659403] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.659413] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.659420] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.662342] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.592 [2024-07-25 12:12:47.671758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.672501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.672519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.672527] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.672721] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.672916] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.672926] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.672934] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.676052] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.592 [2024-07-25 12:12:47.685256] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.685961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.685979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.685987] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.686187] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.686383] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.686394] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.686401] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.689523] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.592 [2024-07-25 12:12:47.698524] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.699265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.699314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.699336] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.699915] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.700511] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.700538] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.700559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.703501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.592 [2024-07-25 12:12:47.711627] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.712276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.712293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.712301] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.712478] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.712655] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.712664] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.712671] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.715506] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.592 [2024-07-25 12:12:47.724704] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.725440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.725457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.725465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.725642] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.592 [2024-07-25 12:12:47.725819] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.592 [2024-07-25 12:12:47.725828] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.592 [2024-07-25 12:12:47.725835] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.592 [2024-07-25 12:12:47.728666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.592 [2024-07-25 12:12:47.737868] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.592 [2024-07-25 12:12:47.738506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.592 [2024-07-25 12:12:47.738523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.592 [2024-07-25 12:12:47.738531] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.592 [2024-07-25 12:12:47.738707] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.738889] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.738899] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.738905] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.741741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.593 [2024-07-25 12:12:47.751166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.751857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.751873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.751881] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.752069] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.752264] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.752273] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.752280] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.755110] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.593 [2024-07-25 12:12:47.764308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.765031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.765084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.765105] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.765518] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.765697] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.765707] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.765713] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.768545] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.593 [2024-07-25 12:12:47.777409] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.778119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.778161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.778182] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.778552] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.778726] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.778735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.778742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.781475] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.593 [2024-07-25 12:12:47.790348] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.791061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.791103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.791124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.791702] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.791929] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.791938] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.791945] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.794637] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.593 [2024-07-25 12:12:47.803245] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.803936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.803951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.803958] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.804145] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.804318] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.804326] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.804333] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.806995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.593 [2024-07-25 12:12:47.816094] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.816777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.816820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.816843] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.817126] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.817312] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.817322] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.817329] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.819923] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.593 [2024-07-25 12:12:47.829012] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.593 [2024-07-25 12:12:47.829700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.593 [2024-07-25 12:12:47.829742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.593 [2024-07-25 12:12:47.829774] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.593 [2024-07-25 12:12:47.830130] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.593 [2024-07-25 12:12:47.830295] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.593 [2024-07-25 12:12:47.830304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.593 [2024-07-25 12:12:47.830310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.593 [2024-07-25 12:12:47.832967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.855 [2024-07-25 12:12:47.842034] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.842475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.842518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.855 [2024-07-25 12:12:47.842540] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.855 [2024-07-25 12:12:47.842931] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.855 [2024-07-25 12:12:47.843102] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.855 [2024-07-25 12:12:47.843112] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.855 [2024-07-25 12:12:47.843118] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.855 [2024-07-25 12:12:47.845824] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.855 [2024-07-25 12:12:47.854957] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.855569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.855612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.855 [2024-07-25 12:12:47.855635] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.855 [2024-07-25 12:12:47.856001] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.855 [2024-07-25 12:12:47.856194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.855 [2024-07-25 12:12:47.856205] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.855 [2024-07-25 12:12:47.856212] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.855 [2024-07-25 12:12:47.858877] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.855 [2024-07-25 12:12:47.868023] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.868660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.868676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.855 [2024-07-25 12:12:47.868683] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.855 [2024-07-25 12:12:47.868845] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.855 [2024-07-25 12:12:47.869009] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.855 [2024-07-25 12:12:47.869020] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.855 [2024-07-25 12:12:47.869026] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.855 [2024-07-25 12:12:47.871671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.855 [2024-07-25 12:12:47.880981] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.881602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.881645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.855 [2024-07-25 12:12:47.881667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.855 [2024-07-25 12:12:47.882033] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.855 [2024-07-25 12:12:47.882204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.855 [2024-07-25 12:12:47.882214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.855 [2024-07-25 12:12:47.882220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.855 [2024-07-25 12:12:47.884906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.855 [2024-07-25 12:12:47.893985] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.894690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.894733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.855 [2024-07-25 12:12:47.894754] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.855 [2024-07-25 12:12:47.895344] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.855 [2024-07-25 12:12:47.895927] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.855 [2024-07-25 12:12:47.895954] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.855 [2024-07-25 12:12:47.895960] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.855 [2024-07-25 12:12:47.898589] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.855 [2024-07-25 12:12:47.906800] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.855 [2024-07-25 12:12:47.907490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.855 [2024-07-25 12:12:47.907533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.907555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.908024] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.908216] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.908227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.908233] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.910892] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.856 [2024-07-25 12:12:47.919727] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.920409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.920453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.920474] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.920911] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.921097] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.921107] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.921114] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.923779] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.856 [2024-07-25 12:12:47.932630] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.933337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.933379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.933400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.933874] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.934037] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.934052] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.934059] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.936794] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.856 [2024-07-25 12:12:47.945421] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.946125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.946167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.946188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.946766] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.947194] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.947204] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.947211] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.949873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.856 [2024-07-25 12:12:47.958233] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.958949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.958991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.959012] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.959294] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.959468] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.959478] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.959484] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.962135] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.856 [2024-07-25 12:12:47.971302] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.971929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.971945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.971953] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.972143] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.972315] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.972325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.972332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.975030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.856 [2024-07-25 12:12:47.984322] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.985019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.985072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.985095] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.985414] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.985578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.985587] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.985594] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:47.988286] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.856 [2024-07-25 12:12:47.997112] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:47.997790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:47.997831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:47.997854] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:47.998222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:47.998397] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:47.998406] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:47.998417] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:48.001139] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.856 [2024-07-25 12:12:48.010082] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:48.010777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:48.010818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:48.010839] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:48.011214] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:48.011388] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:48.011398] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:48.011404] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:48.014056] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.856 [2024-07-25 12:12:48.022875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:48.023575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.856 [2024-07-25 12:12:48.023618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.856 [2024-07-25 12:12:48.023639] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.856 [2024-07-25 12:12:48.024232] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.856 [2024-07-25 12:12:48.024516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.856 [2024-07-25 12:12:48.024526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.856 [2024-07-25 12:12:48.024532] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.856 [2024-07-25 12:12:48.027272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.856 [2024-07-25 12:12:48.035784] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.856 [2024-07-25 12:12:48.036401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.036416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.036423] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.036585] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.036748] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.036756] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.036763] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.857 [2024-07-25 12:12:48.039453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.857 [2024-07-25 12:12:48.048665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.857 [2024-07-25 12:12:48.049379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.049421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.049442] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.049929] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.050115] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.050125] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.050132] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.857 [2024-07-25 12:12:48.052799] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.857 [2024-07-25 12:12:48.061483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.857 [2024-07-25 12:12:48.062163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.062195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.062219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.062797] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.063002] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.063011] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.063017] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.857 [2024-07-25 12:12:48.065708] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.857 [2024-07-25 12:12:48.074280] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.857 [2024-07-25 12:12:48.074958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.074999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.075020] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.075356] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.075530] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.075539] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.075545] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.857 [2024-07-25 12:12:48.078192] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:00.857 [2024-07-25 12:12:48.087167] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.857 [2024-07-25 12:12:48.087794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.087836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.087857] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.088452] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.088672] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.088682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.088688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:00.857 [2024-07-25 12:12:48.091335] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:00.857 [2024-07-25 12:12:48.100166] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:00.857 [2024-07-25 12:12:48.100812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:00.857 [2024-07-25 12:12:48.100828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:00.857 [2024-07-25 12:12:48.100835] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:00.857 [2024-07-25 12:12:48.101006] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:00.857 [2024-07-25 12:12:48.101204] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:00.857 [2024-07-25 12:12:48.101214] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:00.857 [2024-07-25 12:12:48.101221] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.103989] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.119 [2024-07-25 12:12:48.113065] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.113763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.113806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.113827] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.114420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.114711] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.114721] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.114727] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.117365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.119 [2024-07-25 12:12:48.125875] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.126597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.126639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.126660] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.127253] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.127642] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.127651] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.127658] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.130300] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.119 [2024-07-25 12:12:48.138711] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.139400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.139444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.139465] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.140054] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.140637] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.140662] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.140688] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.144741] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.119 [2024-07-25 12:12:48.152439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.153154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.153199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.153221] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.153699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.153868] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.153877] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.153884] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.156617] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.119 [2024-07-25 12:12:48.165368] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.166078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.166121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.166142] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.166511] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.166676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.166685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.166691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.169383] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.119 [2024-07-25 12:12:48.178199] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.178901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.178943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.119 [2024-07-25 12:12:48.178972] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.119 [2024-07-25 12:12:48.179455] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.119 [2024-07-25 12:12:48.179629] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.119 [2024-07-25 12:12:48.179638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.119 [2024-07-25 12:12:48.179645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.119 [2024-07-25 12:12:48.182293] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.119 [2024-07-25 12:12:48.191116] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.119 [2024-07-25 12:12:48.191816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.119 [2024-07-25 12:12:48.191857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.191879] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.192481] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.192662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.192672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.192678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.195319] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.120 [2024-07-25 12:12:48.204088] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.204755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.204771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.204778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.204940] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.205127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.205136] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.205143] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.207812] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.120 [2024-07-25 12:12:48.216934] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.217554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.217596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.217618] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.217975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.218165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.218179] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.218186] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.221022] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.120 [2024-07-25 12:12:48.229945] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.230665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.230708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.230730] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.231289] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.231546] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.231559] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.231569] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.235630] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.120 [2024-07-25 12:12:48.243705] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.244318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.244362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.244384] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.244827] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.244996] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.245005] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.245012] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.247746] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.120 [2024-07-25 12:12:48.256662] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.257280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.257323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.257346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.257733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.257897] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.257906] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.257913] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.260602] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.120 [2024-07-25 12:12:48.269447] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.270154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.270197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.270219] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.270658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.270823] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.270833] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.270839] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.273532] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.120 [2024-07-25 12:12:48.282349] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.282978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.283020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.283056] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.283637] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.284001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.284010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.284016] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.286638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.120 [2024-07-25 12:12:48.295156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.295859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.295900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.295922] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.296429] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.296607] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.296617] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.296623] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.299320] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.120 [2024-07-25 12:12:48.308021] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.308729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.308771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.120 [2024-07-25 12:12:48.308792] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.120 [2024-07-25 12:12:48.309245] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.120 [2024-07-25 12:12:48.309419] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.120 [2024-07-25 12:12:48.309429] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.120 [2024-07-25 12:12:48.309435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.120 [2024-07-25 12:12:48.312089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.120 [2024-07-25 12:12:48.320904] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.120 [2024-07-25 12:12:48.321582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.120 [2024-07-25 12:12:48.321625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.121 [2024-07-25 12:12:48.321647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.121 [2024-07-25 12:12:48.322011] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.121 [2024-07-25 12:12:48.322203] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.121 [2024-07-25 12:12:48.322213] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.121 [2024-07-25 12:12:48.322220] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.121 [2024-07-25 12:12:48.326161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.121 [2024-07-25 12:12:48.334326] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.121 [2024-07-25 12:12:48.335031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.121 [2024-07-25 12:12:48.335085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.121 [2024-07-25 12:12:48.335107] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.121 [2024-07-25 12:12:48.335686] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.121 [2024-07-25 12:12:48.336182] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.121 [2024-07-25 12:12:48.336192] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.121 [2024-07-25 12:12:48.336198] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.121 [2024-07-25 12:12:48.338904] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.121 [2024-07-25 12:12:48.347173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.121 [2024-07-25 12:12:48.347866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.121 [2024-07-25 12:12:48.347908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.121 [2024-07-25 12:12:48.347929] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.121 [2024-07-25 12:12:48.348521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.121 [2024-07-25 12:12:48.349095] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.121 [2024-07-25 12:12:48.349105] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.121 [2024-07-25 12:12:48.349115] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.121 [2024-07-25 12:12:48.351727] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.121 [2024-07-25 12:12:48.360029] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.121 [2024-07-25 12:12:48.360718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.121 [2024-07-25 12:12:48.360761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.121 [2024-07-25 12:12:48.360782] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.121 [2024-07-25 12:12:48.361263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.121 [2024-07-25 12:12:48.361438] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.121 [2024-07-25 12:12:48.361447] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.121 [2024-07-25 12:12:48.361454] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.121 [2024-07-25 12:12:48.364201] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.382 [2024-07-25 12:12:48.373018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.382 [2024-07-25 12:12:48.373665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.382 [2024-07-25 12:12:48.373707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.382 [2024-07-25 12:12:48.373728] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.382 [2024-07-25 12:12:48.374181] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.382 [2024-07-25 12:12:48.374361] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.382 [2024-07-25 12:12:48.374371] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.382 [2024-07-25 12:12:48.374389] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.382 [2024-07-25 12:12:48.377063] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.382 [2024-07-25 12:12:48.386036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.382 [2024-07-25 12:12:48.386742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.382 [2024-07-25 12:12:48.386785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.382 [2024-07-25 12:12:48.386807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.382 [2024-07-25 12:12:48.387070] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.382 [2024-07-25 12:12:48.387234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.382 [2024-07-25 12:12:48.387243] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.382 [2024-07-25 12:12:48.387249] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.382 [2024-07-25 12:12:48.389879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.382 [2024-07-25 12:12:48.398857] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.382 [2024-07-25 12:12:48.399567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.382 [2024-07-25 12:12:48.399609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.382 [2024-07-25 12:12:48.399631] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.382 [2024-07-25 12:12:48.400225] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.382 [2024-07-25 12:12:48.400498] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.382 [2024-07-25 12:12:48.400508] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.382 [2024-07-25 12:12:48.400514] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.382 [2024-07-25 12:12:48.403243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.382 [2024-07-25 12:12:48.411747] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.382 [2024-07-25 12:12:48.412439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.382 [2024-07-25 12:12:48.412483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.382 [2024-07-25 12:12:48.412505] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.382 [2024-07-25 12:12:48.412961] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.382 [2024-07-25 12:12:48.413150] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.413160] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.413167] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.416967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.383 [2024-07-25 12:12:48.425539] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.426242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.426285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.426306] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.426577] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.426759] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.426768] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.426775] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.429494] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.383 [2024-07-25 12:12:48.438343] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.439056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.439099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.439120] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.439382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.439551] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.439560] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.439566] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.442255] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.383 [2024-07-25 12:12:48.451225] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.451929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.451971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.451992] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.452396] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.452570] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.452579] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.452586] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.455386] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.383 [2024-07-25 12:12:48.464419] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.465137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.465180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.465202] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.465693] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.465871] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.465881] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.465887] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.468723] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.383 [2024-07-25 12:12:48.477608] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.478258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.478275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.478283] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.478460] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.478639] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.478649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.478656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.481451] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.383 [2024-07-25 12:12:48.490591] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.491306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.491350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.491373] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.491605] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.491775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.491784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.491791] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.494623] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.383 [2024-07-25 12:12:48.503453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.504131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.504188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.504210] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.504557] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.504721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.504730] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.504737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.507430] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.383 [2024-07-25 12:12:48.516336] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.517035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.517091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.517112] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.517691] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.517879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.517888] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.517895] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.520584] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.383 [2024-07-25 12:12:48.529346] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.530064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.530107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.383 [2024-07-25 12:12:48.530136] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.383 [2024-07-25 12:12:48.530666] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.383 [2024-07-25 12:12:48.530829] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.383 [2024-07-25 12:12:48.530838] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.383 [2024-07-25 12:12:48.530844] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.383 [2024-07-25 12:12:48.533533] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.383 [2024-07-25 12:12:48.542195] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.383 [2024-07-25 12:12:48.542898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.383 [2024-07-25 12:12:48.542940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.542961] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.543355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.543531] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.543540] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.543547] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.546196] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.384 [2024-07-25 12:12:48.555020] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.555721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.555763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.555784] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.556377] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.556778] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.556788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.556795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.559425] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.384 [2024-07-25 12:12:48.568035] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.568724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.568767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.568788] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.569382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.569804] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.569817] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.569823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.572565] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.384 [2024-07-25 12:12:48.581031] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.581741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.581784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.581807] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.582028] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.582222] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.582233] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.582239] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.584902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.384 [2024-07-25 12:12:48.593991] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.594732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.594775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.594796] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.595323] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.595578] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.595591] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.595600] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.599657] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.384 [2024-07-25 12:12:48.607456] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.608170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.608226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.608248] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.608558] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.608732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.608741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.608748] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.611453] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.384 [2024-07-25 12:12:48.620363] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.384 [2024-07-25 12:12:48.621023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.384 [2024-07-25 12:12:48.621080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.384 [2024-07-25 12:12:48.621103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.384 [2024-07-25 12:12:48.621683] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.384 [2024-07-25 12:12:48.622029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.384 [2024-07-25 12:12:48.622039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.384 [2024-07-25 12:12:48.622050] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.384 [2024-07-25 12:12:48.624693] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.645 [2024-07-25 12:12:48.633439] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.645 [2024-07-25 12:12:48.634140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.645 [2024-07-25 12:12:48.634182] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.645 [2024-07-25 12:12:48.634204] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.645 [2024-07-25 12:12:48.634783] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.645 [2024-07-25 12:12:48.635080] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.645 [2024-07-25 12:12:48.635090] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.645 [2024-07-25 12:12:48.635096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.645 [2024-07-25 12:12:48.637833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.645 [2024-07-25 12:12:48.646268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.645 [2024-07-25 12:12:48.646703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.645 [2024-07-25 12:12:48.646745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.645 [2024-07-25 12:12:48.646768] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.645 [2024-07-25 12:12:48.647362] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.645 [2024-07-25 12:12:48.647882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.645 [2024-07-25 12:12:48.647892] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.645 [2024-07-25 12:12:48.647898] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.645 [2024-07-25 12:12:48.650524] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.645 [2024-07-25 12:12:48.659128] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.645 [2024-07-25 12:12:48.659847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.645 [2024-07-25 12:12:48.659889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.645 [2024-07-25 12:12:48.659911] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.645 [2024-07-25 12:12:48.660180] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.645 [2024-07-25 12:12:48.660354] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.645 [2024-07-25 12:12:48.660364] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.645 [2024-07-25 12:12:48.660370] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.645 [2024-07-25 12:12:48.663026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.645 [2024-07-25 12:12:48.672001] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.645 [2024-07-25 12:12:48.672632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.645 [2024-07-25 12:12:48.672648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.645 [2024-07-25 12:12:48.672655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.672819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.672982] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.672990] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.672997] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.675688] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.646 [2024-07-25 12:12:48.684905] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.685601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.685645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.685668] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.686102] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.686357] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.686370] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.686380] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.690436] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.646 [2024-07-25 12:12:48.698152] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.698857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.698901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.698923] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.699189] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.699363] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.699373] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.699383] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.702134] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.646 [2024-07-25 12:12:48.711079] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.711784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.711827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.711849] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.712166] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.712352] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.712362] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.712368] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.715030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.646 [2024-07-25 12:12:48.723911] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.724592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.724635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.724657] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.725118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.725292] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.725302] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.725308] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.728168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.646 [2024-07-25 12:12:48.736861] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.737490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.737528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.737551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.738108] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.738283] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.738292] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.738311] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.740902] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.646 [2024-07-25 12:12:48.749717] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.750432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.750476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.750499] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.751077] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.751275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.751284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.751291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.753879] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.646 [2024-07-25 12:12:48.762889] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.763530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.763548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.763555] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.763732] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.763910] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.763919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.763926] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.766756] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.646 [2024-07-25 12:12:48.775949] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.776642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.776660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.776667] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.776844] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.777021] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.777031] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.777037] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.779868] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.646 [2024-07-25 12:12:48.789069] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.646 [2024-07-25 12:12:48.789778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.646 [2024-07-25 12:12:48.789795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.646 [2024-07-25 12:12:48.789802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.646 [2024-07-25 12:12:48.789979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.646 [2024-07-25 12:12:48.790167] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.646 [2024-07-25 12:12:48.790178] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.646 [2024-07-25 12:12:48.790184] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.646 [2024-07-25 12:12:48.793020] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.646 [2024-07-25 12:12:48.802221] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.802912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.802930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.802938] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.803119] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.803297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.803307] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.803314] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.806148] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.647 [2024-07-25 12:12:48.815342] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.816028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.816049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.816057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.816235] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.816413] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.816423] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.816430] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.819295] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.647 [2024-07-25 12:12:48.828410] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.829118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.829135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.829143] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.829320] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.829499] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.829509] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.829515] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.832352] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.647 [2024-07-25 12:12:48.841549] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.842257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.842275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.842282] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.842459] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.842638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.842648] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.842655] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.845489] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.647 [2024-07-25 12:12:48.854682] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.855386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.855403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.855411] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.855588] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.855765] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.855775] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.855782] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.858615] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.647 [2024-07-25 12:12:48.867812] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.868521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.868538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.868546] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.868724] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.868902] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.868912] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.868919] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.871745] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.647 [2024-07-25 12:12:48.880941] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.647 [2024-07-25 12:12:48.881663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.647 [2024-07-25 12:12:48.881680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.647 [2024-07-25 12:12:48.881692] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.647 [2024-07-25 12:12:48.881869] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.647 [2024-07-25 12:12:48.882054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.647 [2024-07-25 12:12:48.882064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.647 [2024-07-25 12:12:48.882071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.647 [2024-07-25 12:12:48.884899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.908 [2024-07-25 12:12:48.894130] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.908 [2024-07-25 12:12:48.894630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.908 [2024-07-25 12:12:48.894648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.908 [2024-07-25 12:12:48.894656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.908 [2024-07-25 12:12:48.894834] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.908 [2024-07-25 12:12:48.895013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.908 [2024-07-25 12:12:48.895023] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.908 [2024-07-25 12:12:48.895030] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.908 [2024-07-25 12:12:48.897865] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.908 [2024-07-25 12:12:48.907235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.907732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.907748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.907756] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.907934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.908117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.908127] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.908135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.910967] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.909 [2024-07-25 12:12:48.920334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.921018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.921035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.921048] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.921226] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.921404] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.921416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.921423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.924253] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.909 [2024-07-25 12:12:48.933515] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.934204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.934221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.934229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.934405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.934585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.934594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.934601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.937450] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.909 [2024-07-25 12:12:48.946643] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.947335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.947352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.947360] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.947537] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.947717] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.947727] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.947733] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.950563] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.909 [2024-07-25 12:12:48.959758] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.960401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.960418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.960425] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.960602] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.960779] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.960788] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.960795] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.963671] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.909 [2024-07-25 12:12:48.972859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.973366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.973383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.973390] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.973567] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.973745] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.973755] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.973762] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.976599] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.909 [2024-07-25 12:12:48.985970] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.986666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.986683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.986690] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:48.986866] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:48.987049] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:48.987059] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:48.987066] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:48.989899] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.909 [2024-07-25 12:12:48.999105] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:48.999816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:48.999833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:48.999840] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:49.000017] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:49.000201] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:49.000211] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:49.000218] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:49.003045] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.909 [2024-07-25 12:12:49.012238] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:49.012927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:49.012944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:49.012951] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:49.013137] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.909 [2024-07-25 12:12:49.013316] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.909 [2024-07-25 12:12:49.013325] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.909 [2024-07-25 12:12:49.013332] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.909 [2024-07-25 12:12:49.016161] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.909 [2024-07-25 12:12:49.025354] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.909 [2024-07-25 12:12:49.026061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.909 [2024-07-25 12:12:49.026078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.909 [2024-07-25 12:12:49.026086] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.909 [2024-07-25 12:12:49.026263] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.026440] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.026450] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.026457] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.029310] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.910 [2024-07-25 12:12:49.038552] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.039264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.039281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.039289] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.039467] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.039645] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.039654] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.039661] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.042491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.910 [2024-07-25 12:12:49.051684] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.052393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.052410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.052418] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.052596] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.052775] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.052784] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.052794] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.055628] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.910 [2024-07-25 12:12:49.064927] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.065622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.065639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.065647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.065842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.066026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.066036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.066049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.068931] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.910 [2024-07-25 12:12:49.078096] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.078737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.078753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.078760] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.078937] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.079119] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.079129] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.079136] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.081963] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.910 [2024-07-25 12:12:49.091161] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.091874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.091891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.091898] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.092082] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.092260] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.092270] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.092276] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.095120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.910 [2024-07-25 12:12:49.104334] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.104977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.104993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.105001] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.105183] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.105362] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.105372] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.105379] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.108211] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.910 [2024-07-25 12:12:49.117403] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.118025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.118049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.118057] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.118234] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.118411] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.118420] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.118427] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.121257] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.910 [2024-07-25 12:12:49.130452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.131156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.131173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.131181] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.131359] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.131538] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.131547] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.131554] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.134389] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:01.910 [2024-07-25 12:12:49.143586] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:01.910 [2024-07-25 12:12:49.144274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:01.910 [2024-07-25 12:12:49.144292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:01.910 [2024-07-25 12:12:49.144299] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:01.910 [2024-07-25 12:12:49.144476] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:01.910 [2024-07-25 12:12:49.144657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:01.910 [2024-07-25 12:12:49.144667] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:01.910 [2024-07-25 12:12:49.144674] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:01.910 [2024-07-25 12:12:49.147505] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:01.910 [2024-07-25 12:12:49.156732] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.157374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.157393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.157401] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.157579] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.157758] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.157767] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.157774] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.160606] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.172 [2024-07-25 12:12:49.169806] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.170523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.170567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.170590] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.170997] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.171180] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.171190] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.171196] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.174026] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.172 [2024-07-25 12:12:49.182892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.183534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.183551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.183558] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.183736] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.183913] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.183923] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.183930] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.186769] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.172 [2024-07-25 12:12:49.195844] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.196515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.196531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.196538] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.196701] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.196864] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.196873] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.196879] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.199570] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.172 [2024-07-25 12:12:49.208694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.209399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.209442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.209464] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.209984] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.210172] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.210182] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.210189] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.212856] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.172 [2024-07-25 12:12:49.221525] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.222157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.222199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.222222] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.222512] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.222676] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.222685] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.222691] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.225388] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.172 [2024-07-25 12:12:49.234465] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.235081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.235123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.235153] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.235733] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.236139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.236149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.236156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.239024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.172 [2024-07-25 12:12:49.247324] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.248055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.248098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.248121] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.248441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.248605] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.172 [2024-07-25 12:12:49.248614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.172 [2024-07-25 12:12:49.248620] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.172 [2024-07-25 12:12:49.251240] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.172 [2024-07-25 12:12:49.260227] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.172 [2024-07-25 12:12:49.260923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.172 [2024-07-25 12:12:49.260964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.172 [2024-07-25 12:12:49.260986] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.172 [2024-07-25 12:12:49.261423] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.172 [2024-07-25 12:12:49.261598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.261607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.261613] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.264259] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.173 [2024-07-25 12:12:49.273013] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.273425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.273477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.273501] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.274094] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.274279] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.274290] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.274297] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.276886] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.173 [2024-07-25 12:12:49.285852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.286556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.286598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.286620] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.287213] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.287609] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.287619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.287625] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.290269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.173 [2024-07-25 12:12:49.298696] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.299395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.299440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.299463] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.300041] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.300544] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.300554] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.300560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.303304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.173 [2024-07-25 12:12:49.311530] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.312238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.312282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.312305] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.312885] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.313074] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.313084] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.313107] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.316995] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.173 [2024-07-25 12:12:49.325237] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.325950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.325994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.326016] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.326368] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.326542] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.326552] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.326559] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.329323] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.173 [2024-07-25 12:12:49.338109] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.338811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.338853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.338875] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.339469] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.339807] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.339816] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.339823] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.342415] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.173 [2024-07-25 12:12:49.350984] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.351687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.351729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.351751] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.352343] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.352663] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.352672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.352678] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.355272] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.173 [2024-07-25 12:12:49.363880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.364582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.364624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.364647] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.364900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.365085] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.365095] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.365101] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.367771] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.173 [2024-07-25 12:12:49.376804] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.377510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.377553] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.377575] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.377831] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.377995] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.378004] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.173 [2024-07-25 12:12:49.378010] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.173 [2024-07-25 12:12:49.380700] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.173 [2024-07-25 12:12:49.389666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.173 [2024-07-25 12:12:49.390354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.173 [2024-07-25 12:12:49.390389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.173 [2024-07-25 12:12:49.390413] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.173 [2024-07-25 12:12:49.390992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.173 [2024-07-25 12:12:49.391591] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.173 [2024-07-25 12:12:49.391619] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.174 [2024-07-25 12:12:49.391640] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.174 [2024-07-25 12:12:49.394304] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.174 [2024-07-25 12:12:49.402568] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.174 [2024-07-25 12:12:49.403176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.174 [2024-07-25 12:12:49.403218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.174 [2024-07-25 12:12:49.403239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.174 [2024-07-25 12:12:49.403624] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.174 [2024-07-25 12:12:49.403788] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.174 [2024-07-25 12:12:49.403797] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.174 [2024-07-25 12:12:49.403809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.174 [2024-07-25 12:12:49.406501] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.174 [2024-07-25 12:12:49.415579] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.174 [2024-07-25 12:12:49.416333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.174 [2024-07-25 12:12:49.416378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.174 [2024-07-25 12:12:49.416400] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.174 [2024-07-25 12:12:49.416980] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.174 [2024-07-25 12:12:49.417275] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.174 [2024-07-25 12:12:49.417285] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.174 [2024-07-25 12:12:49.417292] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.174 [2024-07-25 12:12:49.420072] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.435 [2024-07-25 12:12:49.428611] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.429323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.429339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.429346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.429508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.429671] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.435 [2024-07-25 12:12:49.429680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.435 [2024-07-25 12:12:49.429687] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.435 [2024-07-25 12:12:49.432378] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.435 [2024-07-25 12:12:49.441597] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.442330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.442375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.442397] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.442900] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.443069] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.435 [2024-07-25 12:12:49.443078] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.435 [2024-07-25 12:12:49.443085] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.435 [2024-07-25 12:12:49.445773] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.435 [2024-07-25 12:12:49.454445] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.455149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.455192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.455215] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.455793] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.456001] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.435 [2024-07-25 12:12:49.456010] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.435 [2024-07-25 12:12:49.456018] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.435 [2024-07-25 12:12:49.458712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.435 [2024-07-25 12:12:49.467427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.468125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.468167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.468189] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.468768] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.469139] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.435 [2024-07-25 12:12:49.469149] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.435 [2024-07-25 12:12:49.469156] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.435 [2024-07-25 12:12:49.471825] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.435 [2024-07-25 12:12:49.480389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.480791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.480807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.480814] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.480977] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.481166] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.435 [2024-07-25 12:12:49.481176] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.435 [2024-07-25 12:12:49.481183] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.435 [2024-07-25 12:12:49.483857] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.435 [2024-07-25 12:12:49.493259] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.435 [2024-07-25 12:12:49.493895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.435 [2024-07-25 12:12:49.493911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.435 [2024-07-25 12:12:49.493919] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.435 [2024-07-25 12:12:49.494100] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.435 [2024-07-25 12:12:49.494274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.494283] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.494290] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.497156] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.436 [2024-07-25 12:12:49.506176] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.506894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.506937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.506959] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.507556] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.508008] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.508018] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.508024] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.510718] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.436 [2024-07-25 12:12:49.519113] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.519749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.519792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.519815] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.520208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.520386] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.520396] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.520403] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.523102] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.436 [2024-07-25 12:12:49.532016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.532711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.532749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.532772] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.533364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.533573] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.533583] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.533589] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.536365] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.436 [2024-07-25 12:12:49.544859] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.545572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.545616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.545638] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.545993] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.546184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.546194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.546201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.548864] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.436 [2024-07-25 12:12:49.557687] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.558390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.558433] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.558455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.559032] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.559294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.559304] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.559310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.561972] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.436 [2024-07-25 12:12:49.570477] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.571166] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.571209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.571231] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.571809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.572221] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.572231] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.572238] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.574903] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.436 [2024-07-25 12:12:49.583275] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.583968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.584017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.584040] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.584548] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.584721] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.584731] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.584737] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.587377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.436 [2024-07-25 12:12:49.596204] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.596841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.596884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.596906] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.597500] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.598092] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.598118] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.598139] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.600821] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.436 [2024-07-25 12:12:49.609122] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.609822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.609864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.609886] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.610303] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.610478] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.436 [2024-07-25 12:12:49.610487] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.436 [2024-07-25 12:12:49.610493] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.436 [2024-07-25 12:12:49.613198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.436 [2024-07-25 12:12:49.621959] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.436 [2024-07-25 12:12:49.622591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.436 [2024-07-25 12:12:49.622633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.436 [2024-07-25 12:12:49.622655] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.436 [2024-07-25 12:12:49.623051] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.436 [2024-07-25 12:12:49.623242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.437 [2024-07-25 12:12:49.623252] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.437 [2024-07-25 12:12:49.623258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.437 [2024-07-25 12:12:49.625919] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.437 [2024-07-25 12:12:49.634778] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.437 [2024-07-25 12:12:49.635478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.437 [2024-07-25 12:12:49.635520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.437 [2024-07-25 12:12:49.635542] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.437 [2024-07-25 12:12:49.635825] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.437 [2024-07-25 12:12:49.635989] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.437 [2024-07-25 12:12:49.635998] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.437 [2024-07-25 12:12:49.636004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.437 [2024-07-25 12:12:49.639873] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.437 [2024-07-25 12:12:49.648389] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.437 [2024-07-25 12:12:49.649107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.437 [2024-07-25 12:12:49.649151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.437 [2024-07-25 12:12:49.649173] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.437 [2024-07-25 12:12:49.649457] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.437 [2024-07-25 12:12:49.649626] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.437 [2024-07-25 12:12:49.649635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.437 [2024-07-25 12:12:49.649641] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.437 [2024-07-25 12:12:49.652371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.437 [2024-07-25 12:12:49.661235] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.437 [2024-07-25 12:12:49.661939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.437 [2024-07-25 12:12:49.661982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.437 [2024-07-25 12:12:49.662004] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.437 [2024-07-25 12:12:49.662382] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.437 [2024-07-25 12:12:49.662556] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.437 [2024-07-25 12:12:49.662565] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.437 [2024-07-25 12:12:49.662572] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.437 [2024-07-25 12:12:49.665245] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.437 [2024-07-25 12:12:49.674173] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.437 [2024-07-25 12:12:49.674880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.437 [2024-07-25 12:12:49.674923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.437 [2024-07-25 12:12:49.674945] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.437 [2024-07-25 12:12:49.675385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.437 [2024-07-25 12:12:49.675560] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.437 [2024-07-25 12:12:49.675570] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.437 [2024-07-25 12:12:49.675576] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.437 [2024-07-25 12:12:49.678269] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.698 [2024-07-25 12:12:49.687169] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.698 [2024-07-25 12:12:49.687886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.698 [2024-07-25 12:12:49.687928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.698 [2024-07-25 12:12:49.687950] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.698 [2024-07-25 12:12:49.688337] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.698 [2024-07-25 12:12:49.688512] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.698 [2024-07-25 12:12:49.688522] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.698 [2024-07-25 12:12:49.688529] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.698 [2024-07-25 12:12:49.691278] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.698 [2024-07-25 12:12:49.700073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.698 [2024-07-25 12:12:49.700714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.698 [2024-07-25 12:12:49.700756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.698 [2024-07-25 12:12:49.700778] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.698 [2024-07-25 12:12:49.701218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.698 [2024-07-25 12:12:49.701392] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.698 [2024-07-25 12:12:49.701401] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.698 [2024-07-25 12:12:49.701408] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.698 [2024-07-25 12:12:49.704155] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.698 [2024-07-25 12:12:49.712972] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.698 [2024-07-25 12:12:49.713678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.698 [2024-07-25 12:12:49.713720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.698 [2024-07-25 12:12:49.713748] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.698 [2024-07-25 12:12:49.714338] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.698 [2024-07-25 12:12:49.714879] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.698 [2024-07-25 12:12:49.714889] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.698 [2024-07-25 12:12:49.714896] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.698 [2024-07-25 12:12:49.717526] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.698 [2024-07-25 12:12:49.725761] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.698 [2024-07-25 12:12:49.726387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.698 [2024-07-25 12:12:49.726432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.698 [2024-07-25 12:12:49.726455] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.698 [2024-07-25 12:12:49.726974] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.698 [2024-07-25 12:12:49.727234] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.698 [2024-07-25 12:12:49.727247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.698 [2024-07-25 12:12:49.727257] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.698 [2024-07-25 12:12:49.731371] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.698 [2024-07-25 12:12:49.739119] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.698 [2024-07-25 12:12:49.739825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.698 [2024-07-25 12:12:49.739868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.739889] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.740309] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.740483] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.740493] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.740499] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.743202] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.699 [2024-07-25 12:12:49.751950] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.752681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.752725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.752747] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.753060] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.753251] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.753261] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.753270] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.756143] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.699 [2024-07-25 12:12:49.764917] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.765547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.765591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.765614] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.766208] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.766638] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.766647] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.766653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.769275] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.699 [2024-07-25 12:12:49.777760] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.778458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.778500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.778522] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.779112] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.779679] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.779689] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.779695] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.782340] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.699 [2024-07-25 12:12:49.790693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.791384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.791401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.791407] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.791569] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.791732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.791741] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.791747] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.794438] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.699 [2024-07-25 12:12:49.803640] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.804359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.804374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.804381] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.804543] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.804705] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.804713] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.804719] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.807411] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.699 [2024-07-25 12:12:49.816518] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.817227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.817271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.817292] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.817769] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.817934] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.817943] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.817950] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.820641] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.699 [2024-07-25 12:12:49.829338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.830038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.830093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.830114] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.830506] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.830670] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.830680] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.830686] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.833377] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.699 [2024-07-25 12:12:49.842196] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.842827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.842870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.842893] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.843493] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.843848] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.843858] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.843865] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.846495] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.699 [2024-07-25 12:12:49.855019] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.855704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.855748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.699 [2024-07-25 12:12:49.855769] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.699 [2024-07-25 12:12:49.856364] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.699 [2024-07-25 12:12:49.856842] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.699 [2024-07-25 12:12:49.856852] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.699 [2024-07-25 12:12:49.856858] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.699 [2024-07-25 12:12:49.859491] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.699 [2024-07-25 12:12:49.867881] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.699 [2024-07-25 12:12:49.868591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.699 [2024-07-25 12:12:49.868635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.868656] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.868975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.869165] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.869175] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.869182] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.872001] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.700 [2024-07-25 12:12:49.880709] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.881423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.881465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.881486] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.882087] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.882367] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.882377] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.882384] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.885039] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.700 [2024-07-25 12:12:49.893554] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.894260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.894302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.894325] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.894903] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.895197] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.895207] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.895213] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.897880] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.700 [2024-07-25 12:12:49.906483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.907188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.907230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.907252] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.907842] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.908006] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.908015] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.908022] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.910716] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.700 [2024-07-25 12:12:49.919563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.920259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.920302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.920324] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.920901] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.921246] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.921256] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.921262] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.924009] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.700 [2024-07-25 12:12:49.932350] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.933064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.933115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.933137] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.933715] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.934148] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.934158] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.934165] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.700 [2024-07-25 12:12:49.936833] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.700 [2024-07-25 12:12:49.945452] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.700 [2024-07-25 12:12:49.946079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.700 [2024-07-25 12:12:49.946096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.700 [2024-07-25 12:12:49.946103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.700 [2024-07-25 12:12:49.946282] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.700 [2024-07-25 12:12:49.946446] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.700 [2024-07-25 12:12:49.946455] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.700 [2024-07-25 12:12:49.946462] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.996 [2024-07-25 12:12:49.949244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.996 [2024-07-25 12:12:49.958633] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.996 [2024-07-25 12:12:49.959347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.996 [2024-07-25 12:12:49.959389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.996 [2024-07-25 12:12:49.959410] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.996 [2024-07-25 12:12:49.959992] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.996 [2024-07-25 12:12:49.960255] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.996 [2024-07-25 12:12:49.960269] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.996 [2024-07-25 12:12:49.960278] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.996 [2024-07-25 12:12:49.964349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.996 [2024-07-25 12:12:49.972095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.996 [2024-07-25 12:12:49.972748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.996 [2024-07-25 12:12:49.972791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.996 [2024-07-25 12:12:49.972813] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.996 [2024-07-25 12:12:49.973197] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.996 [2024-07-25 12:12:49.973374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.996 [2024-07-25 12:12:49.973384] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.996 [2024-07-25 12:12:49.973392] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.996 [2024-07-25 12:12:49.976144] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:49.985083] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:49.985770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:49.985813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:49.985834] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:49.986210] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:49.986374] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:49.986383] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:49.986390] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:49.989080] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.997 [2024-07-25 12:12:49.997940] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:49.998650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:49.998693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:49.998716] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:49.998968] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:49.999156] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:49.999167] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:49.999175] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.001980] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:50.011005] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.011634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.011651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.011658] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.011836] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.012013] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.012022] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.012029] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.015242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.997 [2024-07-25 12:12:50.024110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.024734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.024752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.024761] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.024938] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.025124] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.025135] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.025142] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.027901] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:50.037268] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.037983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.038001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.038008] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.038191] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.038371] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.038381] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.038387] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.041168] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.997 [2024-07-25 12:12:50.050312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.050945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.050961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.050970] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.051149] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.051322] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.051331] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.051338] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.054125] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:50.063372] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.064057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.064073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.064103] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.064288] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.064461] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.064470] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.064477] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.067225] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.997 [2024-07-25 12:12:50.077499] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.078222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.078241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.078250] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.078441] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.078625] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.078635] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.078642] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.081452] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:50.090572] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.091284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.091301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.091309] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.091487] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.091666] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.091675] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.091682] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.094503] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.997 [2024-07-25 12:12:50.103702] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.997 [2024-07-25 12:12:50.104416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.997 [2024-07-25 12:12:50.104459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.997 [2024-07-25 12:12:50.104480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.997 [2024-07-25 12:12:50.105075] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.997 [2024-07-25 12:12:50.105657] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.997 [2024-07-25 12:12:50.105682] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.997 [2024-07-25 12:12:50.105710] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.997 [2024-07-25 12:12:50.108661] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.997 [2024-07-25 12:12:50.116820] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.117526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.117543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.117551] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.117728] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.117892] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.117902] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.117909] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.120712] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.998 [2024-07-25 12:12:50.129922] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.130549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.130565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.130573] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.130745] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.130919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.130929] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.130935] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.133747] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.998 [2024-07-25 12:12:50.142988] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.143720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.143764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.143786] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.144381] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.144640] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.144649] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.144656] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.147432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.998 [2024-07-25 12:12:50.155973] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.156695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.156741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.156763] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.157355] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.157732] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.157742] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.157749] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.160594] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.998 [2024-07-25 12:12:50.169110] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.169801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.169818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.169825] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.170002] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.170187] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.170197] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.170204] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.173030] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.998 [2024-07-25 12:12:50.182226] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.182910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.182927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.182934] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.183116] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.183294] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.183303] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.183310] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.186141] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.998 [2024-07-25 12:12:50.195333] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.196103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.196147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.196170] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.196501] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.196680] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.196690] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.196696] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.199541] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.998 [2024-07-25 12:12:50.208417] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.209070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.209089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.209097] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.209275] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.209454] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.209464] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.209472] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.212306] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:02.998 [2024-07-25 12:12:50.221510] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.222224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.222241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.222249] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.222426] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.222604] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.222614] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.222621] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.225472] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:02.998 [2024-07-25 12:12:50.234685] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:02.998 [2024-07-25 12:12:50.235392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:02.998 [2024-07-25 12:12:50.235409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:02.998 [2024-07-25 12:12:50.235417] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:02.998 [2024-07-25 12:12:50.235594] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:02.998 [2024-07-25 12:12:50.235772] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:02.998 [2024-07-25 12:12:50.235782] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:02.998 [2024-07-25 12:12:50.235792] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:02.998 [2024-07-25 12:12:50.238626] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.261 [2024-07-25 12:12:50.247892] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.248612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.248629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.248637] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.248819] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.249003] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.249012] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.249020] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.251942] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.261 [2024-07-25 12:12:50.261016] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.261730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.261774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.261795] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.262096] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.262274] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.262284] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.262291] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.265198] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.261 [2024-07-25 12:12:50.274075] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.274848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.274890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.274924] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.275109] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.275288] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.275298] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.275305] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.278142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.261 [2024-07-25 12:12:50.287194] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.287769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.287819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.287842] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.288437] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.288907] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.288917] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.288923] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.291762] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.261 [2024-07-25 12:12:50.300301] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.301081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.301125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.301147] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.301726] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.302024] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.302034] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.302040] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.304883] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.261 [2024-07-25 12:12:50.313341] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.314141] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.314184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.314205] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.314785] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.315079] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.315089] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.315096] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.317913] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.261 [2024-07-25 12:12:50.326415] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.327110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.327154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.327177] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.327757] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.328060] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.328070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.328077] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.330863] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.261 [2024-07-25 12:12:50.339449] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.340173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.261 [2024-07-25 12:12:50.340217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.261 [2024-07-25 12:12:50.340239] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.261 [2024-07-25 12:12:50.340699] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.261 [2024-07-25 12:12:50.340874] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.261 [2024-07-25 12:12:50.340883] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.261 [2024-07-25 12:12:50.340890] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.261 [2024-07-25 12:12:50.343732] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.261 [2024-07-25 12:12:50.352571] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.261 [2024-07-25 12:12:50.353290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.353334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.353355] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.353892] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.354071] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.354082] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.354088] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.358005] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.262 [2024-07-25 12:12:50.366337] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.367167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.367211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.367232] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.367811] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.368057] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.368067] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.368091] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.370906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.262 [2024-07-25 12:12:50.379440] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.380164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.380208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.380229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.380807] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.381177] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.381187] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.381194] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.384031] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.262 [2024-07-25 12:12:50.392556] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.393174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.393191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.393199] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.393376] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.393554] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.393564] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.393571] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.396404] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.262 [2024-07-25 12:12:50.405614] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.406319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.406363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.406386] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.406966] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.407278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.407288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.407295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.410128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.262 [2024-07-25 12:12:50.418819] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.419450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.419468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.419479] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.419657] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.419834] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.419843] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.419850] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.422683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.262 [2024-07-25 12:12:50.431880] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.432551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.432594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.432616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.433328] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.433509] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.433518] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.433525] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.436359] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.262 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/bdevperf.sh: line 35: 478447 Killed "${NVMF_APP[@]}" "$@" 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@36 -- # tgt_init 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@15 -- # nvmfappstart -m 0xE 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.262 [2024-07-25 12:12:50.445056] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.262 [2024-07-25 12:12:50.445675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.262 [2024-07-25 12:12:50.445692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.262 [2024-07-25 12:12:50.445699] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.262 [2024-07-25 12:12:50.445876] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.262 [2024-07-25 12:12:50.446059] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.262 [2024-07-25 12:12:50.446070] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.262 [2024-07-25 12:12:50.446076] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.262 [2024-07-25 12:12:50.448906] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@481 -- # nvmfpid=479887 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@482 -- # waitforlisten 479887 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@829 -- # '[' -z 479887 ']' 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:03.262 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:03.262 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:03.263 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:03.263 12:12:50 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:03.263 [2024-07-25 12:12:50.458108] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.263 [2024-07-25 12:12:50.458679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.263 [2024-07-25 12:12:50.458696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.263 [2024-07-25 12:12:50.458703] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.263 [2024-07-25 12:12:50.458880] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.263 [2024-07-25 12:12:50.459065] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.263 [2024-07-25 12:12:50.459075] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.263 [2024-07-25 12:12:50.459082] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.263 [2024-07-25 12:12:50.461911] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
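The xtrace above is the test bringing the target back: tgt_init calls nvmfappstart -m 0xE, which launches nvmf_tgt inside the cvl_0_0_ns_spdk network namespace with -i 0 -e 0xFFFF -m 0xE, records the new pid (479887), and waits for the RPC socket /var/tmp/spdk.sock before issuing any RPCs. A hedged stand-in for that wait, using only the command shown in the log plus a plain polling loop (the real waitforlisten helper in the test's common scripts is more thorough):

# relaunch command exactly as it appears in the log
ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xE &
nvmfpid=$!

# crude wait: poll until the RPC UNIX-domain socket shows up
for _ in $(seq 1 100); do
    [ -S /var/tmp/spdk.sock ] && break
    sleep 0.1
done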
00:27:03.263 [2024-07-25 12:12:50.471281] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.263 [2024-07-25 12:12:50.471853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.263 [2024-07-25 12:12:50.471870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.263 [2024-07-25 12:12:50.471878] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.263 [2024-07-25 12:12:50.472059] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.263 [2024-07-25 12:12:50.472237] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.263 [2024-07-25 12:12:50.472247] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.263 [2024-07-25 12:12:50.472254] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.263 [2024-07-25 12:12:50.475089] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.263 [2024-07-25 12:12:50.484460] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.263 [2024-07-25 12:12:50.485124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.263 [2024-07-25 12:12:50.485140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.263 [2024-07-25 12:12:50.485149] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.263 [2024-07-25 12:12:50.485326] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.263 [2024-07-25 12:12:50.485504] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.263 [2024-07-25 12:12:50.485517] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.263 [2024-07-25 12:12:50.485524] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.263 [2024-07-25 12:12:50.488314] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.263 [2024-07-25 12:12:50.497621] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.263 [2024-07-25 12:12:50.498336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.263 [2024-07-25 12:12:50.498354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.263 [2024-07-25 12:12:50.498362] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.263 [2024-07-25 12:12:50.498540] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.263 [2024-07-25 12:12:50.498539] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:27:03.263 [2024-07-25 12:12:50.498579] [ DPDK EAL parameters: nvmf -c 0xE --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:03.263 [2024-07-25 12:12:50.498725] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.263 [2024-07-25 12:12:50.498735] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.263 [2024-07-25 12:12:50.498742] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.263 [2024-07-25 12:12:50.501583] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.524 [2024-07-25 12:12:50.510785] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.511432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.511450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.511458] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.511636] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.511815] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.511825] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.511833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.524 [2024-07-25 12:12:50.514664] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.524 [2024-07-25 12:12:50.523865] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.524539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.524556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.524564] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.524741] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.524919] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.524928] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.524938] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 
00:27:03.524 EAL: No free 2048 kB hugepages reported on node 1 00:27:03.524 [2024-07-25 12:12:50.527776] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.524 [2024-07-25 12:12:50.536980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.537607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.537624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.537632] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.537809] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.537988] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.537997] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.538004] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.524 [2024-07-25 12:12:50.540834] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.524 [2024-07-25 12:12:50.550030] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.550649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.550666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.550673] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.550850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.551029] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.551039] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.551051] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.524 [2024-07-25 12:12:50.553912] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
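The EAL notice above only says that NUMA node 1 has no free 2048 kB hugepages reserved; the target keeps initializing later in this log, so the memory request is evidently satisfied from another node. A hedged way to inspect the hugepage layout on a host like this, assuming a standard Linux sysfs/procfs layout rather than anything specific to the CI machine:

# per-NUMA-node count of reserved and free 2 MB hugepages
grep . /sys/devices/system/node/node*/hugepages/hugepages-2048kB/{nr,free}_hugepages

# system-wide summary
grep -i huge /proc/meminfo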
00:27:03.524 [2024-07-25 12:12:50.558776] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 3 00:27:03.524 [2024-07-25 12:12:50.563066] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.563692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.563710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.563717] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.563890] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.564068] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.564077] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.564100] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.524 [2024-07-25 12:12:50.566905] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.524 [2024-07-25 12:12:50.576156] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.576742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.576759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.576767] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.576944] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.524 [2024-07-25 12:12:50.577127] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.524 [2024-07-25 12:12:50.577137] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.524 [2024-07-25 12:12:50.577144] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.524 [2024-07-25 12:12:50.580062] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.524 [2024-07-25 12:12:50.589185] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.524 [2024-07-25 12:12:50.589778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.524 [2024-07-25 12:12:50.589795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.524 [2024-07-25 12:12:50.589802] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.524 [2024-07-25 12:12:50.589975] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.590155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.590166] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.590172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.592959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.602219] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.602868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.602886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.602894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.603072] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.603270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.603280] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.603289] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.606142] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.525 [2024-07-25 12:12:50.615360] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.616069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.616087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.616094] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.616272] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.616444] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.616454] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.616461] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.619271] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.628563] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.629278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.629296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.629304] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.629483] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.629662] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.629672] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.629679] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.632509] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.640232] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:03.525 [2024-07-25 12:12:50.640259] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:03.525 [2024-07-25 12:12:50.640266] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:03.525 [2024-07-25 12:12:50.640272] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:03.525 [2024-07-25 12:12:50.640278] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
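The app_setup_trace notices above spell out how to pull trace data from this run; restated as a short sketch, with the tracepoint group, instance id and shared-memory file name taken verbatim from the log (the /tmp destination is just an example):

# live snapshot of nvmf tracepoints while instance 0 is running
spdk_trace -s nvmf -i 0

# or keep the shared-memory trace file for offline analysis
cp /dev/shm/nvmf_trace.0 /tmp/nvmf_trace.0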
00:27:03.525 [2024-07-25 12:12:50.640317] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:27:03.525 [2024-07-25 12:12:50.640399] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:27:03.525 [2024-07-25 12:12:50.640401] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:03.525 [2024-07-25 12:12:50.641703] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.642409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.642428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.642436] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.642614] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.642792] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.642802] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.642809] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.645645] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.654848] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.655506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.655525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.655533] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.655711] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.655890] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.655898] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.655906] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.658740] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
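The three reactor notices match the -m 0xE core mask passed to nvmf_tgt: 0xE is binary 1110, so reactors run on cores 1, 2 and 3, which is also why spdk_app_start reported three available cores. A one-liner to check that bit arithmetic (plain Python invoked from the shell, nothing SPDK-specific):

python3 -c 'm = 0xE; print([c for c in range(8) if m >> c & 1])'   # -> [1, 2, 3]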
00:27:03.525 [2024-07-25 12:12:50.667933] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.668662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.668681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.668688] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.668861] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.669033] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.669046] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.669054] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.671893] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.681095] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.681832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.681850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.681858] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.682035] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.682217] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.682227] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.682234] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.685082] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.525 [2024-07-25 12:12:50.694285] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.695005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.695024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.695033] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.695222] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.695399] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.695408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.695416] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.525 [2024-07-25 12:12:50.698244] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.525 [2024-07-25 12:12:50.707474] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.525 [2024-07-25 12:12:50.708455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.525 [2024-07-25 12:12:50.708473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.525 [2024-07-25 12:12:50.708481] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.525 [2024-07-25 12:12:50.708658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.525 [2024-07-25 12:12:50.708836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.525 [2024-07-25 12:12:50.708846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.525 [2024-07-25 12:12:50.708853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.526 [2024-07-25 12:12:50.711685] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.526 [2024-07-25 12:12:50.720545] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.526 [2024-07-25 12:12:50.721181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.526 [2024-07-25 12:12:50.721199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.526 [2024-07-25 12:12:50.721207] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.526 [2024-07-25 12:12:50.721385] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.526 [2024-07-25 12:12:50.721564] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.526 [2024-07-25 12:12:50.721573] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.526 [2024-07-25 12:12:50.721580] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.526 [2024-07-25 12:12:50.724410] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.526 [2024-07-25 12:12:50.733602] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.526 [2024-07-25 12:12:50.734317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.526 [2024-07-25 12:12:50.734334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.526 [2024-07-25 12:12:50.734342] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.526 [2024-07-25 12:12:50.734519] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.526 [2024-07-25 12:12:50.734698] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.526 [2024-07-25 12:12:50.734708] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.526 [2024-07-25 12:12:50.734722] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.526 [2024-07-25 12:12:50.737552] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.526 [2024-07-25 12:12:50.746745] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.526 [2024-07-25 12:12:50.747436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.526 [2024-07-25 12:12:50.747453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.526 [2024-07-25 12:12:50.747461] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.526 [2024-07-25 12:12:50.747639] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.526 [2024-07-25 12:12:50.747817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.526 [2024-07-25 12:12:50.747827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.526 [2024-07-25 12:12:50.747833] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.526 [2024-07-25 12:12:50.750666] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.526 [2024-07-25 12:12:50.759852] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.526 [2024-07-25 12:12:50.760582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.526 [2024-07-25 12:12:50.760601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.526 [2024-07-25 12:12:50.760609] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.526 [2024-07-25 12:12:50.760787] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.526 [2024-07-25 12:12:50.760967] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.526 [2024-07-25 12:12:50.760976] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.526 [2024-07-25 12:12:50.760983] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.526 [2024-07-25 12:12:50.763816] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.787 [2024-07-25 12:12:50.773008] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.787 [2024-07-25 12:12:50.773648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.787 [2024-07-25 12:12:50.773665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.787 [2024-07-25 12:12:50.773672] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.787 [2024-07-25 12:12:50.773850] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.787 [2024-07-25 12:12:50.774028] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.787 [2024-07-25 12:12:50.774038] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.787 [2024-07-25 12:12:50.774049] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.787 [2024-07-25 12:12:50.776875] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.787 [2024-07-25 12:12:50.786073] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.787 [2024-07-25 12:12:50.786723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.787 [2024-07-25 12:12:50.786739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.787 [2024-07-25 12:12:50.786746] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.787 [2024-07-25 12:12:50.786923] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.787 [2024-07-25 12:12:50.787107] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.787 [2024-07-25 12:12:50.787117] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.787 [2024-07-25 12:12:50.787124] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.787 [2024-07-25 12:12:50.789950] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.787 [2024-07-25 12:12:50.799141] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.787 [2024-07-25 12:12:50.799763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.787 [2024-07-25 12:12:50.799780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.787 [2024-07-25 12:12:50.799787] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.787 [2024-07-25 12:12:50.799964] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.787 [2024-07-25 12:12:50.800155] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.787 [2024-07-25 12:12:50.800165] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.787 [2024-07-25 12:12:50.800172] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.787 [2024-07-25 12:12:50.802998] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.787 [2024-07-25 12:12:50.812189] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.787 [2024-07-25 12:12:50.812888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.787 [2024-07-25 12:12:50.812904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.787 [2024-07-25 12:12:50.812912] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.787 [2024-07-25 12:12:50.813093] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.787 [2024-07-25 12:12:50.813272] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.787 [2024-07-25 12:12:50.813282] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.787 [2024-07-25 12:12:50.813288] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.787 [2024-07-25 12:12:50.816120] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.787 [2024-07-25 12:12:50.825308] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.787 [2024-07-25 12:12:50.825947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.787 [2024-07-25 12:12:50.825964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.787 [2024-07-25 12:12:50.825971] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.787 [2024-07-25 12:12:50.826153] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.787 [2024-07-25 12:12:50.826334] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.787 [2024-07-25 12:12:50.826343] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.787 [2024-07-25 12:12:50.826350] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.829180] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.788 [2024-07-25 12:12:50.838367] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.839013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.839030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.839037] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.839220] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.839398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.839407] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.839414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.842242] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.788 [2024-07-25 12:12:50.851427] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.852137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.852154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.852162] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.852339] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.852516] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.852526] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.852533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.855362] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.788 [2024-07-25 12:12:50.864546] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.865239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.865256] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.865264] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.865442] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.865621] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.865630] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.865637] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.868471] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.788 [2024-07-25 12:12:50.877661] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.878307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.878324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.878331] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.878508] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.878685] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.878695] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.878702] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.881534] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.788 [2024-07-25 12:12:50.890726] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.891429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.891446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.891454] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.891632] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.891809] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.891818] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.891825] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.894653] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.788 [2024-07-25 12:12:50.903845] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.904456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.904473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.904480] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.904658] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.904836] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.904846] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.904853] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.907683] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.788 [2024-07-25 12:12:50.917036] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.917730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.917747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.917757] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.917934] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.918117] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.918128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.918135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.920959] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.788 [2024-07-25 12:12:50.930144] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.930858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.930875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.930882] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.931064] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.931242] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.931251] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.931258] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.934088] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.788 [2024-07-25 12:12:50.943273] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.943733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.943750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.943758] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.943935] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.944118] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.788 [2024-07-25 12:12:50.944128] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.788 [2024-07-25 12:12:50.944135] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.788 [2024-07-25 12:12:50.946962] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.788 [2024-07-25 12:12:50.956320] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.788 [2024-07-25 12:12:50.957031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.788 [2024-07-25 12:12:50.957051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.788 [2024-07-25 12:12:50.957059] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.788 [2024-07-25 12:12:50.957236] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.788 [2024-07-25 12:12:50.957415] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:50.957428] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:50.957435] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:50.960267] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.789 [2024-07-25 12:12:50.969483] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:50.969997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:50.970013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:50.970021] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:50.970218] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:50.970398] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:50.970408] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:50.970414] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:50.973243] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.789 [2024-07-25 12:12:50.982599] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:50.983321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:50.983338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:50.983346] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:50.983523] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:50.983702] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:50.983712] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:50.983718] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:50.986553] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.789 [2024-07-25 12:12:50.995708] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:50.996339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:50.996356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:50.996363] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:50.996541] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:50.996720] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:50.996729] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:50.996736] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:50.999567] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.789 [2024-07-25 12:12:51.008764] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:51.009448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:51.009465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:51.009472] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:51.009644] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:51.009817] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:51.009827] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:51.009834] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:51.012673] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:03.789 [2024-07-25 12:12:51.021827] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:51.022530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:51.022546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:51.022553] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:51.022730] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:51.022909] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:03.789 [2024-07-25 12:12:51.022919] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:03.789 [2024-07-25 12:12:51.022925] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:03.789 [2024-07-25 12:12:51.025757] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:03.789 [2024-07-25 12:12:51.034947] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:03.789 [2024-07-25 12:12:51.035590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:03.789 [2024-07-25 12:12:51.035608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:03.789 [2024-07-25 12:12:51.035616] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:03.789 [2024-07-25 12:12:51.035794] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:03.789 [2024-07-25 12:12:51.035973] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.050 [2024-07-25 12:12:51.035983] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.050 [2024-07-25 12:12:51.035994] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.050 [2024-07-25 12:12:51.038826] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.050 [2024-07-25 12:12:51.048018] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.050 [2024-07-25 12:12:51.048640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-07-25 12:12:51.048656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.050 [2024-07-25 12:12:51.048665] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.050 [2024-07-25 12:12:51.048846] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.050 [2024-07-25 12:12:51.049026] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.050 [2024-07-25 12:12:51.049036] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.050 [2024-07-25 12:12:51.049047] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.050 [2024-07-25 12:12:51.051872] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.050 [2024-07-25 12:12:51.061064] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.050 [2024-07-25 12:12:51.061776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.050 [2024-07-25 12:12:51.061793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.050 [2024-07-25 12:12:51.061801] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.050 [2024-07-25 12:12:51.061979] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.050 [2024-07-25 12:12:51.062161] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.050 [2024-07-25 12:12:51.062171] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.050 [2024-07-25 12:12:51.062178] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.050 [2024-07-25 12:12:51.065008] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.050 [2024-07-25 12:12:51.074197] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.050 [2024-07-25 12:12:51.074883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.074899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.074908] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.075091] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.075270] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.075279] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.075287] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.078117] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.051 [2024-07-25 12:12:51.087313] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.087746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.087762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.087770] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.087947] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.088132] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.088142] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.088152] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.090978] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.051 [2024-07-25 12:12:51.100505] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.101223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.101239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.101247] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.101420] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.101600] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.101610] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.101617] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.104466] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.051 [2024-07-25 12:12:51.113666] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.114099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.114116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.114124] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.114301] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.114480] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.114490] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.114496] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.117328] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.051 [2024-07-25 12:12:51.126851] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.127562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.127579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.127587] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.127764] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.127942] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.127952] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.127958] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.130788] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.051 [2024-07-25 12:12:51.139978] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.140695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.140711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.140718] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.140895] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.141077] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.141088] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.141094] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.143920] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.051 [2024-07-25 12:12:51.153312] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.153889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.153907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.153916] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.154099] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.154278] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.154288] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.154295] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.157128] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.051 [2024-07-25 12:12:51.166484] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.167119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.167136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.167144] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.167322] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.167501] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.167511] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.167518] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.170349] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.051 [2024-07-25 12:12:51.179567] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.180212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.180229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.180237] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.180415] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.180598] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.180607] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.180614] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.183445] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.051 [2024-07-25 12:12:51.192641] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.051 [2024-07-25 12:12:51.193278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.051 [2024-07-25 12:12:51.193295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.051 [2024-07-25 12:12:51.193302] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.051 [2024-07-25 12:12:51.193475] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.051 [2024-07-25 12:12:51.193649] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.051 [2024-07-25 12:12:51.193658] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.051 [2024-07-25 12:12:51.193665] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.051 [2024-07-25 12:12:51.196499] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.051 [2024-07-25 12:12:51.205694] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.206384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.206400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.206408] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.206580] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.206753] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.206763] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.206769] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.209607] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.052 [2024-07-25 12:12:51.218795] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.219409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.219426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.219434] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.219612] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.219791] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.219800] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.219807] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.222638] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.052 [2024-07-25 12:12:51.231992] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.232666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.232683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.232691] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.232868] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.233054] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.233064] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.233071] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.235897] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.052 [2024-07-25 12:12:51.245089] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.245798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.245815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.245824] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.246000] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.246184] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.246194] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.246201] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.249024] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.052 [2024-07-25 12:12:51.258217] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.258912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.258929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.258937] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.259118] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.259297] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.259306] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.259313] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.262140] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.052 [2024-07-25 12:12:51.271325] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.272018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.272035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.272051] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.272228] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.272406] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.272416] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.272423] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.275256] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.052 [2024-07-25 12:12:51.284453] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.285163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.285180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.285188] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.285367] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.285543] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.285553] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.285560] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.052 [2024-07-25 12:12:51.288395] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.052 [2024-07-25 12:12:51.297584] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.052 [2024-07-25 12:12:51.298204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.052 [2024-07-25 12:12:51.298221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.052 [2024-07-25 12:12:51.298229] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.052 [2024-07-25 12:12:51.298405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.052 [2024-07-25 12:12:51.298585] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.052 [2024-07-25 12:12:51.298594] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.052 [2024-07-25 12:12:51.298601] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.301432] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.313 [2024-07-25 12:12:51.310626] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.311251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.311269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.311277] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.311454] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.311632] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.311646] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.311653] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.314487] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@862 -- # return 0 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.313 [2024-07-25 12:12:51.323693] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.324203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.324220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.324227] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.324405] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.324584] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.324595] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.324602] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.327435] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.313 [2024-07-25 12:12:51.336798] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.337499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.337517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.337525] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.337703] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.337882] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.337893] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.337900] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.340733] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.313 [2024-07-25 12:12:51.349929] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.350555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.350571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.350579] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.350756] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.350935] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.350948] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.350956] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@17 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.313 [2024-07-25 12:12:51.353787] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.313 [2024-07-25 12:12:51.357010] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:04.313 [2024-07-25 12:12:51.362980] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.363597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.363614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.363622] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.363799] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.363976] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.363986] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.363992] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.366823] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@18 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.313 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.313 [2024-07-25 12:12:51.376180] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.376869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.376886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.313 [2024-07-25 12:12:51.376894] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.313 [2024-07-25 12:12:51.377076] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.313 [2024-07-25 12:12:51.377256] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.313 [2024-07-25 12:12:51.377266] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.313 [2024-07-25 12:12:51.377272] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.313 [2024-07-25 12:12:51.380107] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.313 [2024-07-25 12:12:51.389338] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.313 [2024-07-25 12:12:51.390062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.313 [2024-07-25 12:12:51.390089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.314 [2024-07-25 12:12:51.390098] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.314 [2024-07-25 12:12:51.390276] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.314 [2024-07-25 12:12:51.390456] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.314 [2024-07-25 12:12:51.390466] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.314 [2024-07-25 12:12:51.390473] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.314 [2024-07-25 12:12:51.393347] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.314 Malloc0 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@19 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.314 [2024-07-25 12:12:51.402543] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.314 [2024-07-25 12:12:51.403246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.314 [2024-07-25 12:12:51.403263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.314 [2024-07-25 12:12:51.403272] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.314 [2024-07-25 12:12:51.403449] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.314 [2024-07-25 12:12:51.403628] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.314 [2024-07-25 12:12:51.403638] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.314 [2024-07-25 12:12:51.403645] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.314 [2024-07-25 12:12:51.406477] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 
00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@20 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@21 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:04.314 [2024-07-25 12:12:51.415665] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:04.314 [2024-07-25 12:12:51.416413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:04.314 [2024-07-25 12:12:51.416431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xf33980 with addr=10.0.0.2, port=4420 00:27:04.314 [2024-07-25 12:12:51.416439] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0xf33980 is same with the state(5) to be set 00:27:04.314 [2024-07-25 12:12:51.416617] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xf33980 (9): Bad file descriptor 00:27:04.314 [2024-07-25 12:12:51.416800] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Ctrlr is in error state 00:27:04.314 [2024-07-25 12:12:51.416809] nvme_ctrlr.c:1818:spdk_nvme_ctrlr_reconnect_poll_async: *ERROR*: [nqn.2016-06.io.spdk:cnode1] controller reinitialization failed 00:27:04.314 [2024-07-25 12:12:51.416816] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode1] in failed state. 00:27:04.314 [2024-07-25 12:12:51.418395] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:04.314 [2024-07-25 12:12:51.419644] bdev_nvme.c:2065:_bdev_nvme_reset_ctrlr_complete: *ERROR*: Resetting controller failed. 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:04.314 12:12:51 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@38 -- # wait 478833 00:27:04.314 [2024-07-25 12:12:51.428828] nvme_ctrlr.c:1720:nvme_ctrlr_disconnect: *NOTICE*: [nqn.2016-06.io.spdk:cnode1] resetting controller 00:27:04.314 [2024-07-25 12:12:51.549760] bdev_nvme.c:2067:_bdev_nvme_reset_ctrlr_complete: *NOTICE*: Resetting controller successful. 
00:27:14.303
00:27:14.303                                                                  Latency(us)
00:27:14.303 Device Information          : runtime(s)       IOPS      MiB/s     Fail/s     TO/s    Average        min        max
00:27:14.303 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:27:14.303 Verification LBA range: start 0x0 length 0x4000
00:27:14.303 Nvme1n1                      :      15.01    8121.50      31.72   12410.69       0.00    6213.77    1296.47   23592.96
00:27:14.303 ===================================================================================================================
00:27:14.303 Total                        :               8121.50      31.72   12410.69       0.00    6213.77    1296.47   23592.96
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@39 -- # sync
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@40 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]]
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@42 -- # trap - SIGINT SIGTERM EXIT
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- host/bdevperf.sh@44 -- # nvmftestfini
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@488 -- # nvmfcleanup
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@117 -- # sync
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@119 -- # '[' tcp == tcp ']'
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@120 -- # set +e
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@121 -- # for i in {1..20}
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp
00:27:14.303 rmmod nvme_tcp
00:27:14.303 rmmod nvme_fabrics
00:27:14.303 rmmod nvme_keyring
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@124 -- # set -e
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@125 -- # return 0
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@489 -- # '[' -n 479887 ']'
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@490 -- # killprocess 479887
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@948 -- # '[' -z 479887 ']'
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@952 -- # kill -0 479887
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # uname
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']'
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 479887
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@954 -- # process_name=reactor_1
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']'
00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@966 -- # echo 'killing process with pid 479887'
killing process with pid 479887
12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@967 -- # kill 479887 00:27:14.303
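A quick consistency check on the latency table above: the MiB/s column should be the IOPS column times the 4096-byte I/O size reported in the job line. A throwaway bc one-liner (not part of the harness) confirms the figures agree:

  # 8121.50 IOPS x 4096 B per I/O, expressed in MiB/s (1 MiB = 1048576 B)
  echo 'scale=2; 8121.50 * 4096 / 1048576' | bc    # prints 31.72, matching the MiB/s column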
12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@972 -- # wait 479887 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:14.303 12:13:00 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:15.243 00:27:15.243 real 0m25.597s 00:27:15.243 user 1m2.295s 00:27:15.243 sys 0m5.779s 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host.nvmf_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:27:15.243 ************************************ 00:27:15.243 END TEST nvmf_bdevperf 00:27:15.243 ************************************ 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@48 -- # run_test nvmf_target_disconnect /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:15.243 ************************************ 00:27:15.243 START TEST nvmf_target_disconnect 00:27:15.243 ************************************ 00:27:15.243 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh --transport=tcp 00:27:15.503 * Looking for test storage... 
00:27:15.503 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # uname -s 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:15.503 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.504 
12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@5 -- # export PATH 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@47 -- # : 0 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@11 -- # PLUGIN_DIR=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/app/fio/nvme 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@13 -- # MALLOC_BDEV_SIZE=64 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@14 -- # 
MALLOC_BLOCK_SIZE=512 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@69 -- # nvmftestinit 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@285 -- # xtrace_disable 00:27:15.504 12:13:02 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # pci_devs=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # net_devs=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # e810=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@296 -- # local -ga e810 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # x722=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@297 -- # local -ga x722 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # mlx=() 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@298 -- # local -ga mlx 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:20.792 
12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:20.792 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:20.792 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:20.792 Found net devices under 0000:86:00.0: cvl_0_0 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:20.792 Found net devices under 0000:86:00.1: cvl_0_1 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@414 -- # is_hw=yes 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:20.792 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- 
nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:20.793 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:20.793 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.170 ms 00:27:20.793 00:27:20.793 --- 10.0.0.2 ping statistics --- 00:27:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.793 rtt min/avg/max/mdev = 0.170/0.170/0.170/0.000 ms 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:20.793 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:20.793 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.380 ms 00:27:20.793 00:27:20.793 --- 10.0.0.1 ping statistics --- 00:27:20.793 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:20.793 rtt min/avg/max/mdev = 0.380/0.380/0.380/0.000 ms 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@422 -- # return 0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@70 -- # run_test nvmf_target_disconnect_tc1 nvmf_target_disconnect_tc1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:20.793 ************************************ 00:27:20.793 START TEST nvmf_target_disconnect_tc1 00:27:20.793 ************************************ 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- host/target_disconnect.sh@32 -- # NOT /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@648 -- # local es=0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@650 -- # valid_exec_arg /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@636 -- # local arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # type -t /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:20.793 12:13:07 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # type -P /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # arg=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@642 -- # [[ -x /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect ]] 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:20.793 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.793 [2024-07-25 12:13:07.747190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:20.793 [2024-07-25 12:13:07.747231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0xa5ee60 with addr=10.0.0.2, port=4420 00:27:20.793 [2024-07-25 12:13:07.747253] nvme_tcp.c:2711:nvme_tcp_ctrlr_construct: *ERROR*: failed to create admin qpair 00:27:20.793 [2024-07-25 12:13:07.747263] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:27:20.793 [2024-07-25 12:13:07.747270] nvme.c: 913:spdk_nvme_probe: *ERROR*: Create probe context failed 00:27:20.793 spdk_nvme_probe() failed for transport address '10.0.0.2' 00:27:20.793 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect: errors occurred 00:27:20.793 Initializing NVMe Controllers 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@651 -- # es=1 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:27:20.793 00:27:20.793 real 0m0.099s 00:27:20.793 user 0m0.039s 00:27:20.793 sys 0m0.059s 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc1 -- common/autotest_common.sh@10 -- # set +x 00:27:20.793 ************************************ 00:27:20.793 END TEST nvmf_target_disconnect_tc1 00:27:20.793 ************************************ 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@71 -- # run_test nvmf_target_disconnect_tc2 nvmf_target_disconnect_tc2 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect 
-- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:20.793 ************************************ 00:27:20.793 START TEST nvmf_target_disconnect_tc2 00:27:20.793 ************************************ 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1123 -- # nvmf_target_disconnect_tc2 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@37 -- # disconnect_init 10.0.0.2 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=484910 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 484910 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 484910 ']' 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:20.793 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:20.794 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:20.794 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:20.794 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:20.794 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:20.794 12:13:07 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:20.794 [2024-07-25 12:13:07.880416] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
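The nvmfappstart/waitforlisten sequence above amounts to launching nvmf_tgt inside the cvl_0_0_ns_spdk namespace and then polling its RPC socket until it responds. A rough stand-alone approximation is sketched below; the binary path, namespace name, and flags are taken from the log, while the polling loop is an assumption (the harness uses its own waitforlisten helper rather than this exact command):

  # Start the target in the test namespace with the core mask and trace flags seen in the log
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 &
  nvmfpid=$!
  # Poll the default /var/tmp/spdk.sock until the target answers RPCs (waitforlisten stand-in)
  until ./scripts/rpc.py -t 1 rpc_get_methods > /dev/null 2>&1; do sleep 0.5; done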
00:27:20.794 [2024-07-25 12:13:07.880457] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:20.794 EAL: No free 2048 kB hugepages reported on node 1 00:27:20.794 [2024-07-25 12:13:07.950986] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:27:20.794 [2024-07-25 12:13:08.029968] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:27:20.794 [2024-07-25 12:13:08.030003] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:27:20.794 [2024-07-25 12:13:08.030009] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:27:20.794 [2024-07-25 12:13:08.030015] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:27:20.794 [2024-07-25 12:13:08.030020] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:27:20.794 [2024-07-25 12:13:08.030132] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5 00:27:20.794 [2024-07-25 12:13:08.030247] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6 00:27:20.794 [2024-07-25 12:13:08.030353] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4 00:27:20.794 [2024-07-25 12:13:08.030354] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.732 Malloc0 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.732 [2024-07-25 12:13:08.739233] 
tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.732 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 [2024-07-25 12:13:08.764269] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@42 -- # reconnectpid=484941 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@44 -- # sleep 2 00:27:21.733 12:13:08 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@40 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:27:21.733 EAL: No free 2048 kB hugepages reported on node 1 00:27:23.639 12:13:10 
nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@45 -- # kill -9 484910 00:27:23.639 12:13:10 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@47 -- # sleep 2 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 [2024-07-25 12:13:10.790219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read 
completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 [2024-07-25 12:13:10.790422] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Write completed with 
error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.639 starting I/O failed 00:27:23.639 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 [2024-07-25 12:13:10.790617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 
00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Read completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 Write completed with error (sct=0, sc=8) 00:27:23.640 starting I/O failed 00:27:23.640 [2024-07-25 12:13:10.790810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:23.640 [2024-07-25 12:13:10.791268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.791285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.791534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.791544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.791961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.791971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.792407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.792437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.792687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.792728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.793041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.793055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 
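The burst of failed completions above is the point of this test case: the target (nvmfpid 484910) is killed with SIGKILL while the reconnect example still has I/O in flight. Stripped of the xtrace plumbing, the tc2 sequence visible in the log is roughly the sketch below; the variable names and the restart comment are inferred, not copied verbatim from the script:

  ./build/examples/reconnect -q 32 -o 4096 -w randrw -M 50 -t 10 -c 0xF \
      -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' &   # host/target_disconnect.sh@40
  reconnectpid=$!                                                  # @42
  sleep 2                                                          # @44
  kill -9 "$nvmfpid"                                               # @45: pid 484910 in this run
  sleep 2                                                          # @47
  # the script is then expected to bring the target back (disconnect_init, as at @37)
  # and verify that the reconnect example rides out the disruption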
00:27:23.640 [2024-07-25 12:13:10.793342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.793372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.793844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.793873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.794339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.794370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.794909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.794938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.795457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.795488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.795941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.795971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.796490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.796520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9144000b90 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.796992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.797040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.797534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.797566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.798105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.798116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 
00:27:23.640 [2024-07-25 12:13:10.798487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.798497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.798827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.798855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.799406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.799437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.799889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.799899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.800151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.800165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.800692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.800722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.801061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.801091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.801592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.801606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.802024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.802037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.802516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.802547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 
00:27:23.640 [2024-07-25 12:13:10.803062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.803092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.803592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.803622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.804024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.804071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.804522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.804535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.640 [2024-07-25 12:13:10.804961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.640 [2024-07-25 12:13:10.804973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.640 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.805475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.805489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.805917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.805930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.806359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.806390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.806851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.806880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.807330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.807360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 
00:27:23.641 [2024-07-25 12:13:10.807840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.807870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.808379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.808409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.808795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.808824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.809366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.809397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.809864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.809899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.810387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.810416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.810889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.810919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.811400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.811430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.811904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.811933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.812405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.812435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 
00:27:23.641 [2024-07-25 12:13:10.812973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.813002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.813435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.813466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.813933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.813962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.814294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.814324] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.814869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.814899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.815302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.815332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.815872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.815901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.816446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.816476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.816932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.816962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.817400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.817430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 
00:27:23.641 [2024-07-25 12:13:10.817944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.817973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.818426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.818440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.818724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.818754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.819215] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.819246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.819745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.819758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.641 [2024-07-25 12:13:10.820130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.641 [2024-07-25 12:13:10.820144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.641 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.820504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.820534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.820991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.821021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.821503] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.821533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.822054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.822085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 
00:27:23.642 [2024-07-25 12:13:10.822469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.822499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.822967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.823001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.823464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.823495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.824005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.824035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.824594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.824625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.825087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.825117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.825583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.825596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.825948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.825962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.826464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.826478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.826933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.826963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 
00:27:23.642 [2024-07-25 12:13:10.827426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.827455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.827914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.827943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.828454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.828485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.828945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.828974] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.829462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.829501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.829982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.829996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.830450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.830480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.831020] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.831056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.831470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.831500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.832035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.832073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 
00:27:23.642 [2024-07-25 12:13:10.832616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.832645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.833104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.833134] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.833430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.833459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.833995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.834024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.834447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.834477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.834941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.834970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.835460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.835490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.836004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.836034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.836593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.642 [2024-07-25 12:13:10.836623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.642 qpair failed and we were unable to recover it. 00:27:23.642 [2024-07-25 12:13:10.837184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.837215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 
00:27:23.643 [2024-07-25 12:13:10.837730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.837759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.838289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.838318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.838829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.838858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.839371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.839401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.839873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.839902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.840460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.840490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.841015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.841056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.841311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.841340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.841805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.841837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.842259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.842290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 
00:27:23.643 [2024-07-25 12:13:10.842832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.842861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.843394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.843425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.843894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.843924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.844384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.844415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.844901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.844932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.845388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.845418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.845905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.845935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.846472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.846502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.847036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.847074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.847624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.847654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 
00:27:23.643 [2024-07-25 12:13:10.847891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.847921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.848605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.848627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.849017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.849032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.849471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.849486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.849906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.849920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.850336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.850350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.850857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.850871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.851098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.851112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.851613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.851627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.852076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.852090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 
00:27:23.643 [2024-07-25 12:13:10.852551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.852564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.853063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.853077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.853444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.853458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.643 [2024-07-25 12:13:10.853985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.643 [2024-07-25 12:13:10.854015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.643 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.854566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.854597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.855117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.855148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.855551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.855565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.856087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.856100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.856629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.856643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.857072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.857089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 
00:27:23.644 [2024-07-25 12:13:10.857514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.857528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.857909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.857923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.858302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.858332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.858738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.858767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.859251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.859281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.859765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.859795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.860261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.860291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.860754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.860784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.861288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.861302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.861659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.861673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 
00:27:23.644 [2024-07-25 12:13:10.862098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.862128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.862666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.862695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.863092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.863122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.863593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.863623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.864187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.864235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.864740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.864769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.865283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.865314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.865718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.865748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.866204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.866235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.866753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.866783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 
00:27:23.644 [2024-07-25 12:13:10.867231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.867261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.867657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.644 [2024-07-25 12:13:10.867686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.644 qpair failed and we were unable to recover it. 00:27:23.644 [2024-07-25 12:13:10.868144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.868174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.868658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.868688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.869142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.869173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.869686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.869716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.870200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.870236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.870704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.870733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.871243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.871273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.871745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.871775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 
00:27:23.645 [2024-07-25 12:13:10.872176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.872206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.872665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.872694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.873167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.873198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.873592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.873621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.874021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.874060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.874541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.874571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.875030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.875070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.875534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.875564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.875963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.875993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.876463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.876493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 
00:27:23.645 [2024-07-25 12:13:10.877065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.877095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.877661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.877690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.878224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.878254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.878768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.878782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.879279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.879293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.879800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.879829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.880364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.880378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.880849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.880862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.881280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.881294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.881707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.881736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 
00:27:23.645 [2024-07-25 12:13:10.882191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.882222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.882619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.882632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.883151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.883181] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.883719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.883754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.884213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.884243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.884733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.884762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.885235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.885249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.645 [2024-07-25 12:13:10.885474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.645 [2024-07-25 12:13:10.885503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.645 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.885952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.885983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.886444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.886459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-07-25 12:13:10.886829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.886843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.887282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.887296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.887802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.887832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.888241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.888271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.888786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.888815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.889164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.889195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.889708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.889749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.890229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.890243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.890744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.890758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.891256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.891270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-07-25 12:13:10.891698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.891727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.892075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.892106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.892566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.892596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.893117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.893131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.893639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.893668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.894200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.894214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.894527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.894557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.895010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.895040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.895561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.895590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.914 [2024-07-25 12:13:10.896073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.896105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 
00:27:23.914 [2024-07-25 12:13:10.896648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.914 [2024-07-25 12:13:10.896677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.914 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.897144] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.897174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.897635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.897648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.898036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.898077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.898592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.898621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.899084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.899116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.899566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.899595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.900059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.900089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.900611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.900641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.901176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.901207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 
00:27:23.915 [2024-07-25 12:13:10.901688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.901717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.902231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.902262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.902752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.902782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.903319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.903349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.903880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.903910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.904330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.904360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.904771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.904806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.905236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.905250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.905705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.905735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.906179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.906210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 
00:27:23.915 [2024-07-25 12:13:10.906722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.906753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.907285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.907316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.907828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.907857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.908325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.908356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.908560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.908589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.909125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.909156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.909473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.909486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.909982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.910011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.910502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.910532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.911063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.911094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 
00:27:23.915 [2024-07-25 12:13:10.911551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.911581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.912078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.912110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.912556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.912586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.912839] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.915 [2024-07-25 12:13:10.912869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.915 qpair failed and we were unable to recover it. 00:27:23.915 [2024-07-25 12:13:10.913350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.913380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.913864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.913893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.914353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.914383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.914919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.914949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.915496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.915526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.916062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.916098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 
00:27:23.916 [2024-07-25 12:13:10.916631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.916661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.917147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.917185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.917730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.917759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.918277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.918307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.918785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.918815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.919357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.919388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.919835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.919865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.920375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.920406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.920796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.920825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.921280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.921311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 
00:27:23.916 [2024-07-25 12:13:10.921618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.921632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.922059] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.922088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.922599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.922628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.923161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.923191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.923734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.923764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.924292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.924306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.924748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.924761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.925217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.925231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.925751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.925780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.926182] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.926212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 
00:27:23.916 [2024-07-25 12:13:10.926694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.926723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.927267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.927298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.927831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.927861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.928321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.928351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.928815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.928844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.929290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.929321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.929842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.916 [2024-07-25 12:13:10.929872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.916 qpair failed and we were unable to recover it. 00:27:23.916 [2024-07-25 12:13:10.930327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.930356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.930882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.930916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.931314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.931344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 
00:27:23.917 [2024-07-25 12:13:10.931792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.931822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.932308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.932338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.932857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.932886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.933313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.933344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.933653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.933682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.934214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.934244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.934768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.934797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.935319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.935349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.935874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.935904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.936447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.936483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 
00:27:23.917 [2024-07-25 12:13:10.936998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.937028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.937577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.937607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.938075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.938106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.938648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.938677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.939140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.939171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.939635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.939664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.940129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.940160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.940668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.940698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.941231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.941261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.941777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.941807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 
00:27:23.917 [2024-07-25 12:13:10.942295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.942327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.942854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.942883] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.943422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.943452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.943901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.943930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.944387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.944417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.944672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.944701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.945172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.945202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.945716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.945746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.946280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.946311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.946850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.946880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 
00:27:23.917 [2024-07-25 12:13:10.947334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.917 [2024-07-25 12:13:10.947371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.917 qpair failed and we were unable to recover it. 00:27:23.917 [2024-07-25 12:13:10.947907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.947920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.948330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.948344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.948787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.948816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.949306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.949336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.949850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.949880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.950391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.950421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.950954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.950984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.951444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.951473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.951949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.951979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 
00:27:23.918 [2024-07-25 12:13:10.952453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.952484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.953016] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.953055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.953584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.953614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.954018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.954056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.954583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.954612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.955068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.955099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.955557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.955586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.956119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.956132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.956605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.956635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 00:27:23.918 [2024-07-25 12:13:10.957085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.918 [2024-07-25 12:13:10.957098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.918 qpair failed and we were unable to recover it. 
[... the same three-line sequence (posix_sock_create: connect() failed, errno = 111; nvme_tcp_qpair_connect_sock: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it) repeats for every reconnect attempt from 2024-07-25 12:13:10.957538 through 12:13:11.054313 ...]
00:27:23.924 [2024-07-25 12:13:11.054694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.054723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.055191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.055205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.055633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.055662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.056129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.056160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.056735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.056766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.057279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.057309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.057758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.057788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.058253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.058283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.058812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.058842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.059367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.059397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-07-25 12:13:11.059930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.059973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.060438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.060468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.060934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.060947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.061327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.061357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.061805] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.061834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.062224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.062255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.062864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.062934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.063744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.063783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.064368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.064400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.064910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.064920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 
00:27:23.924 [2024-07-25 12:13:11.065342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.065353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.065779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.065818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.066281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.066311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.066773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.066803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.067255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.067275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.067725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.067735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.068100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.068173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.068605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.068615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.069071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.924 [2024-07-25 12:13:11.069090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.924 qpair failed and we were unable to recover it. 00:27:23.924 [2024-07-25 12:13:11.069406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.069428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-07-25 12:13:11.069853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.069870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.070290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.070308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.070801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.070843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.071308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.071351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.071983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.072028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.072706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.072764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.073271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.073289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.073729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.073747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.074202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.074213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.074649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.074667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-07-25 12:13:11.075026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.075041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.075313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.075331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.075770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.075812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.076443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.076462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.076882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.076892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.077310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.077321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.077751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.077781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.078038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.078077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.078560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.078589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.079104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.079135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-07-25 12:13:11.079678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.079707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.080173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.080204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.080690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.080720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.081189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.081219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.081753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.081783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.082231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.082262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.082842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.082911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.083495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.083532] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.084030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.084074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.084583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.084613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 
00:27:23.925 [2024-07-25 12:13:11.085127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.085158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.085409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.085439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.085899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.085912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.086335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.086349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.086789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.086819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.925 qpair failed and we were unable to recover it. 00:27:23.925 [2024-07-25 12:13:11.087353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.925 [2024-07-25 12:13:11.087384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.087896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.087925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.088370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.088401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.088874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.088904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.089470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.089500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 
00:27:23.926 [2024-07-25 12:13:11.089970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.089999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.090523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.090537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.090971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.091000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.091428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.091459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.092018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.092063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.092470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.092483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.092967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.092997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.093539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.093570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.093983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.094012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.094551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.094565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 
00:27:23.926 [2024-07-25 12:13:11.094992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.095022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.095473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.095512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.096017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.096056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.096570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.096606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.097057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.097071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.097249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.097279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.097834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.097865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.098395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.098424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.098898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.098911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.099353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.099367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 
00:27:23.926 [2024-07-25 12:13:11.099790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.099804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.100312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.100343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.100824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.100854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.101261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.101275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.101775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.101804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.102289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.102319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.102658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.102687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.103100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.103114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.103549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.103578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.104117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.104148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 
00:27:23.926 [2024-07-25 12:13:11.104478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.104507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.104975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.926 [2024-07-25 12:13:11.105005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.926 qpair failed and we were unable to recover it. 00:27:23.926 [2024-07-25 12:13:11.105471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.105502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.106013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.106060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.106606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.106636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.107173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.107204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.107672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.107702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.108179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.108209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.108603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.108633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.109100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.109130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 
00:27:23.927 [2024-07-25 12:13:11.109598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.109633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.110097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.110128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.110638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.110668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.111227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.111257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.111792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.111821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.112273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.112303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.112751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.112781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.113299] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.113328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.113737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.113767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.114278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.114308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 
00:27:23.927 [2024-07-25 12:13:11.114757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.114786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.115323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.115354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.115824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.115854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.116367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.116397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.116953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.116983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.117445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.117475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.117957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.117986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.118444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.118474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.119013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.119051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.119583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.119612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 
00:27:23.927 [2024-07-25 12:13:11.120076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.120106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.120555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.120584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.121066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.121096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.121555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.121584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.121986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.122000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.122417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.122448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.122898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.122928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.123395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.123431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.927 [2024-07-25 12:13:11.123942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.927 [2024-07-25 12:13:11.123972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.927 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.124509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.124540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 
00:27:23.928 [2024-07-25 12:13:11.125054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.125067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.125495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.125524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.126077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.126109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.126642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.126672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.127193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.127223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.127737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.127767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.128229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.128260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.128794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.128824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.129090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.129121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.129613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.129643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 
00:27:23.928 [2024-07-25 12:13:11.130122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.130151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.130602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.130632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.131039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.131088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.131572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.131601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.132056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.132087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.132583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.132613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.133162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.133192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.133660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.133690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.134226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.134257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.134792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.134822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 
00:27:23.928 [2024-07-25 12:13:11.135354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.135384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.135895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.135925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.136385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.136415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.136878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.136908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.137445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.137476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.137934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.137948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.138453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.138483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.139017] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.139055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.139584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.139614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.140167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.140197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 
00:27:23.928 [2024-07-25 12:13:11.140660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.140689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.141153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.141184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.141658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.141687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.142203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.142234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.142758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.142788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.143326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.928 [2024-07-25 12:13:11.143356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.928 qpair failed and we were unable to recover it. 00:27:23.928 [2024-07-25 12:13:11.143915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.143945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.144499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.144534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.145079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.145116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.145581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.145611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 
00:27:23.929 [2024-07-25 12:13:11.146145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.146174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.146713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.146743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.147279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.147310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.147836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.147849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.148288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.148319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.148832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.148862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.149394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.149424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.149983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.149996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.150445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.150461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.150994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.151027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 
00:27:23.929 [2024-07-25 12:13:11.151513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.151544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.152034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.152073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.152572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.152602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.153123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.153153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.153681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.153711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.154193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.154223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:23.929 [2024-07-25 12:13:11.154755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:23.929 [2024-07-25 12:13:11.154784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:23.929 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.155323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.155356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.155821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.155851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.156339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.156370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-25 12:13:11.156866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.156896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.157358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.157372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.157810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.157823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.158322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.158336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.158817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.158847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.159313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.159350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.159803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.159833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.160370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.160400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.160862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.160891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.161373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.161404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 
00:27:24.197 [2024-07-25 12:13:11.161935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.161965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.162499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.162529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.162919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.162949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.163465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.163495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.164023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.164061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.164509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.164539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.164879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.197 [2024-07-25 12:13:11.164908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.197 qpair failed and we were unable to recover it. 00:27:24.197 [2024-07-25 12:13:11.165360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.165374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.165594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.165607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.166139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.166171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 
00:27:24.198 [2024-07-25 12:13:11.166640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.166670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.167186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.167217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.167676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.167706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.168199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.168229] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.168695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.168725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.169261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.169291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.169780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.169809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.170374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.170404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.170871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.170901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.171411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.171441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 
00:27:24.198 [2024-07-25 12:13:11.171732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.171762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.172292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.172322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.172772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.172813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.173251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.173265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.173772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.173801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.174337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.174367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.174906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.174936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.175336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.175367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.175752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.175781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.176254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.176299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 
00:27:24.198 [2024-07-25 12:13:11.176835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.176863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.177347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.177378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.177860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.177889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.178410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.178440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.178982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.179012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.179318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.179349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.179809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.179839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.180373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.180387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.180862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.180875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.181385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.181399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 
00:27:24.198 [2024-07-25 12:13:11.181882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.181912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.182429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.182459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.182992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.183021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.198 [2024-07-25 12:13:11.183487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.198 [2024-07-25 12:13:11.183517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.198 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.183965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.183995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.184455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.184485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.185011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.185040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.185603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.185633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.185831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.185861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.186419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.186450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 
00:27:24.199 [2024-07-25 12:13:11.186936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.186966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.187527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.187558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.188095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.188125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.188611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.188640] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.189177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.189207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.189720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.189750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.190216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.190246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.190728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.190758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.191150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.191180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.191712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.191742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 
00:27:24.199 [2024-07-25 12:13:11.191996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.192026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.192547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.192576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.193110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.193140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.193613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.193644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.194128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.194159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.194602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.194631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.195033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.195082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.195639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.195670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.196235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.196266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.196752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.196781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 
00:27:24.199 [2024-07-25 12:13:11.197246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.197276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.197811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.197841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.198405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.198435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.198947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.198982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.199362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.199376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.199793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.199823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.200284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.200314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.200727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.200757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.201272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.201302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.201840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.201869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 
00:27:24.199 [2024-07-25 12:13:11.202410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.199 [2024-07-25 12:13:11.202440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.199 qpair failed and we were unable to recover it. 00:27:24.199 [2024-07-25 12:13:11.202890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.202919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.203400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.203430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.203965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.203994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.204454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.204484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.205024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.205062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.205528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.205557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.205746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.205775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.206322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.206352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.206815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.206845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 
00:27:24.200 [2024-07-25 12:13:11.207320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.207355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.207819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.207849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.208188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.208219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.208677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.208706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.209212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.209226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.209747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.209777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.210288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.210318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.210861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.210890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.211335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.211348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.211846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.211859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 
00:27:24.200 [2024-07-25 12:13:11.212276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.212289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.212792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.212804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.213218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.213231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.213662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.213674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.214099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.214112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.214589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.214601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.215053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.215067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.215453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.215482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.215938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.215968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.216452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.216465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 
00:27:24.200 [2024-07-25 12:13:11.216891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.216920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.217384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.217414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.217883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.217913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.218425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.218455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.218919] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.218949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.219416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.219446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.219961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.219990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.220487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.220523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.200 [2024-07-25 12:13:11.220970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.200 [2024-07-25 12:13:11.220984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.200 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.221483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.221496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 
00:27:24.201 [2024-07-25 12:13:11.221996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.222010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.222447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.222478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.223013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.223051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.223500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.223529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.223935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.223965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.224450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.224481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.225019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.225056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.225573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.225603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.226063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.226092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.226570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.226583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 
00:27:24.201 [2024-07-25 12:13:11.227012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.227041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.227607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.227638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.228136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.228167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.228679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.228692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.229052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.229067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.229500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.229513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.229926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.229940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.230309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.230340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.230901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.230930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.231494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.231508] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 
00:27:24.201 [2024-07-25 12:13:11.231880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.231893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.232349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.232380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.232836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.232865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.233337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.233368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.233858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.233887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.234346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.234377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.234627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.234656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.235131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.235161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.235406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.235436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.235915] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.235928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 
00:27:24.201 [2024-07-25 12:13:11.236479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.236510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.236973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.237003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.237409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.237439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.237929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.237943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.238374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.238405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.238933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.238963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.201 qpair failed and we were unable to recover it. 00:27:24.201 [2024-07-25 12:13:11.239431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.201 [2024-07-25 12:13:11.239444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.239925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.239955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.240307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.240338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.240799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.240827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 
00:27:24.202 [2024-07-25 12:13:11.241368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.241382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.241812] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.241825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.242206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.242236] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.242773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.242802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.243253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.243282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.243780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.243810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.244285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.244329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.244819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.244832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.245307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.245337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.245877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.245907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 
00:27:24.202 [2024-07-25 12:13:11.246419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.246449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.246964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.246993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.247519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.247550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.248110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.248141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.248673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.248702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.249098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.249112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.249481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.249511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.249988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.250017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.250487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.250517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.251029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.251069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 
00:27:24.202 [2024-07-25 12:13:11.251515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.251545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.252005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.252035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.252535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.252565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.253037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.253076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.253500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.253529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.254063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.254099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.254635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.202 [2024-07-25 12:13:11.254664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.202 qpair failed and we were unable to recover it. 00:27:24.202 [2024-07-25 12:13:11.255195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.255225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.255766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.255795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.256305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.256335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 
00:27:24.203 [2024-07-25 12:13:11.256800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.256829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.257291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.257321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.257834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.257864] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.258384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.258415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.258857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.258887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.259435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.259465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.259989] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.260018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.260487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.260516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.260983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.261012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.261485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.261515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 
00:27:24.203 [2024-07-25 12:13:11.262121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.262152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.262638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.262668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.263183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.263214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.263659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.263688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.264137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.264167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.264625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.264655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.265191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.265220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.265688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.265717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.266188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.266217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.266680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.266709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 
00:27:24.203 [2024-07-25 12:13:11.267121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.267150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.267612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.267641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.267980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.268015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.268559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.268589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.269125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.269155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.269636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.269665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.270150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.270180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.270646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.270675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.271208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.271239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.271755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.271785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 
00:27:24.203 [2024-07-25 12:13:11.272342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.272372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.272910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.272939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.273453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.273484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.273944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.273973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.203 [2024-07-25 12:13:11.274514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.203 [2024-07-25 12:13:11.274544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.203 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.275096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.275126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.275694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.275724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.276263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.276293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.276750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.276779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.277117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.277147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 
00:27:24.204 [2024-07-25 12:13:11.277676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.277690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.278125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.278155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.278692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.278722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.279120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.279150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.279608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.279638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.280101] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.280131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.280602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.280631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.281027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.281067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.281514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.281543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.282117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.282153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 
00:27:24.204 [2024-07-25 12:13:11.282565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.282594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.283129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.283159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.283672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.283701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.284258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.284287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.284801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.284831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.285368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.285398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.285872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.285901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.286414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.286444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.286888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.286918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.287359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.287389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 
00:27:24.204 [2024-07-25 12:13:11.287931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.287960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.288353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.288384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.288853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.288882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.289424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.289455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.289936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.289965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.290429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.290459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.290998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.291027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.291498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.291528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.291922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.291963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.292469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.292499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 
00:27:24.204 [2024-07-25 12:13:11.292957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.292986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.293527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.293558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.204 qpair failed and we were unable to recover it. 00:27:24.204 [2024-07-25 12:13:11.294036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.204 [2024-07-25 12:13:11.294072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.294459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.294488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.294885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.294915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.295452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.295466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.295893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.295922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.296437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.296451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.296948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.296962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.297406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.297420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 
00:27:24.205 [2024-07-25 12:13:11.297834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.297862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.298397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.298427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.298978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.299008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.299422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.299452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.299920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.299950] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.300675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.300707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.301242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.301272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.301742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.301755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.302179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.302210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.302679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.302709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 
00:27:24.205 [2024-07-25 12:13:11.303130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.303161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.303626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.303654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.304190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.304221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.304701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.304731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.305191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.305221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.305752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.305782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.306308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.306338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.306747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.306776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.307290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.307320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.307709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.307739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 
00:27:24.205 [2024-07-25 12:13:11.308266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.308296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.308754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.308784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.309325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.309355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.309871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.309884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.310370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.310401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.310927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.310957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.311506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.311536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.312075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.312106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.312637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.312667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 00:27:24.205 [2024-07-25 12:13:11.313204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.205 [2024-07-25 12:13:11.313235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.205 qpair failed and we were unable to recover it. 
00:27:24.205 [2024-07-25 12:13:11.313728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.313757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.314317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.314347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.314799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.314812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.314982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.314995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.315420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.315451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.315932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.315962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.316513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.316543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.317062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.317098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.317639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.317668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.318149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.318180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 
00:27:24.206 [2024-07-25 12:13:11.318697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.318726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.319196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.319226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.319706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.319735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.320266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.320279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.320765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.320795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.321258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.321288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.321804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.321833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.322396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.322426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.322884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.322898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.323398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.323428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 
00:27:24.206 [2024-07-25 12:13:11.323909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.323939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.324411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.324441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.324975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.325004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.325521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.325552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.326009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.326038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.326599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.326628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.327163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.327177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.327658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.327688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.328160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.328191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.328729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.328758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 
00:27:24.206 [2024-07-25 12:13:11.329228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.329258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.329771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.206 [2024-07-25 12:13:11.329801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.206 qpair failed and we were unable to recover it. 00:27:24.206 [2024-07-25 12:13:11.330331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.330361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.330923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.330952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.331421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.331456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.332019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.332070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.332633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.332662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.333061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.333091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.333561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.333574] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.334051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.334065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 
00:27:24.207 [2024-07-25 12:13:11.334493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.334522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.335062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.335091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.335570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.335600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.336139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.336153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.336533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.336563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.336955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.336984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.337447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.337477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.337990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.338021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.338548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.338578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.339150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.339180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 
00:27:24.207 [2024-07-25 12:13:11.339704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.339733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.340298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.340328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.340779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.340808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.341344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.341373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.341847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.341876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.342412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.342442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.342984] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.343013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.343428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.343442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.343659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.343672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.344176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.344190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 
00:27:24.207 [2024-07-25 12:13:11.344649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.344678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.345149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.345163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.345529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.345542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.346047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.346061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.346543] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.346572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.347039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.347076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.347588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.347617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.348177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.348207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.348748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.348777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 00:27:24.207 [2024-07-25 12:13:11.349307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.207 [2024-07-25 12:13:11.349337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.207 qpair failed and we were unable to recover it. 
00:27:24.208 [2024-07-25 12:13:11.349793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.349823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.350376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.350405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.350857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.350870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.351380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.351410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.351946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.351975] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.352532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.352568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.353135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.353166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.353646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.353674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.354154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.354184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.354698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.354728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 
00:27:24.208 [2024-07-25 12:13:11.355200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.355230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.355787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.355815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.356323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.356338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.356766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.356780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.357256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.357271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.357754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.357784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.358173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.358204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.358678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.358707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.359265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.359279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.359793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.359807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 
00:27:24.208 [2024-07-25 12:13:11.360242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.360274] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.360815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.360845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.361364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.361378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.361887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.361900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.362404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.362434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.362985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.363014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.363488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.363518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.363773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.363802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.364339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.364370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.364841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.364870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 
00:27:24.208 [2024-07-25 12:13:11.365333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.365363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.365867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.365897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.366408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.366443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.366957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.366986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.367520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.367550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.367800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.367830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.208 [2024-07-25 12:13:11.368316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.208 [2024-07-25 12:13:11.368347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.208 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.368885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.368915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.369428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.369458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.369868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.369897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 
00:27:24.209 [2024-07-25 12:13:11.370360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.370389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.370872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.370912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.371412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.371426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.371842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.371872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.372399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.372429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.372848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.372878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.373337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.373367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.373883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.373912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.374426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.374456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.374933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.374962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 
00:27:24.209 [2024-07-25 12:13:11.375494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.375524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.375991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.376021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.376572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.376603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.377128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.377158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.377673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.377702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.378214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.378244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.378777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.378806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.379271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.379301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.379780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.379810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.380203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.380243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 
00:27:24.209 [2024-07-25 12:13:11.380725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.380754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.381205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.381235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.381735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.381764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.382297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.382327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.382888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.382918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.383347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.383377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.383906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.383935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.384500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.384531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.384977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.385008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 00:27:24.209 [2024-07-25 12:13:11.385491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.209 [2024-07-25 12:13:11.385521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.209 qpair failed and we were unable to recover it. 
00:27:24.209 [2024-07-25 12:13:11.385997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:24.209 [2024-07-25 12:13:11.386026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:24.209 qpair failed and we were unable to recover it.
00:27:24.209 ... 00:27:24.482 [2024-07-25 12:13:11.386573] ... [2024-07-25 12:13:11.493629] the same three-line error sequence (posix_sock_create connect() failed with errno = 111, nvme_tcp_qpair_connect_sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420, "qpair failed and we were unable to recover it.") repeats for every connection attempt in this interval.
00:27:24.484 [2024-07-25 12:13:11.494186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.494217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.494756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.494786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.495320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.495351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.495864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.495893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.496430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.496472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.497031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.497069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.497641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.497670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.498157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.498188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.498730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.498777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.499335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.499366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 
00:27:24.484 [2024-07-25 12:13:11.499929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.499959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.500445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.500484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.501009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.501023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.501563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.501595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.502106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.502138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.502677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.502707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.503313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.503343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.503825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.503855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.504334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.504364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.504901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.504930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 
00:27:24.484 [2024-07-25 12:13:11.505378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.505408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.505828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.505859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.506353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.506383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.484 qpair failed and we were unable to recover it. 00:27:24.484 [2024-07-25 12:13:11.506866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.484 [2024-07-25 12:13:11.506901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.507386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.507417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.507951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.507981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.508456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.508487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.509029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.509074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.509629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.509659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.510157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.510188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 
00:27:24.485 [2024-07-25 12:13:11.510652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.510682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.511195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.511226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.511761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.511790] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.512320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.512334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.512755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.512769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.513202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.513233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.513782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.513812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.514392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.514423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.515006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.515036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.515508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.515539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 
00:27:24.485 [2024-07-25 12:13:11.516010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.516039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.516620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.516650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.517203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.517235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.517803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.517832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.518376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.518407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.518897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.518926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.519429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.519459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.520019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.520056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.520616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.520646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.521197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.521228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 
00:27:24.485 [2024-07-25 12:13:11.521789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.521818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.522402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.522434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.522916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.522946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.523478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.523509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.524022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.524059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.524518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.524547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.525090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.525120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.525709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.525739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.526249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.526279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.485 [2024-07-25 12:13:11.526807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.526836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 
00:27:24.485 [2024-07-25 12:13:11.527337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.485 [2024-07-25 12:13:11.527367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.485 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.527928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.527958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.528521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.528552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.529097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.529128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.529694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.529724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.530239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.530270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.530785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.530815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.531281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.531312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.531848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.531877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.532441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.532471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 
00:27:24.486 [2024-07-25 12:13:11.533008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.533038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.533587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.533618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.534163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.534193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.534783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.534813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.535397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.535428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.536019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.536055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.536637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.536666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.537234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.537265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.537861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.537891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.538390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.538421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 
00:27:24.486 [2024-07-25 12:13:11.538887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.538917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.539456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.539487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.540030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.540077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.540605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.540635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.541201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.541232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.541792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.541822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.542334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.542366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.542910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.542941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.543431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.486 [2024-07-25 12:13:11.543461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.486 qpair failed and we were unable to recover it. 00:27:24.486 [2024-07-25 12:13:11.544025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.544064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 
00:27:24.487 [2024-07-25 12:13:11.544627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.544656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.545220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.545264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.545730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.545759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.546249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.546264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.546781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.546811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.547298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.547328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.547877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.547907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.548486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.548516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.549022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.549072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.549595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.549624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 
00:27:24.487 [2024-07-25 12:13:11.550167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.550200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.550689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.550719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.551205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.551220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.551743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.551773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.552325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.552356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.552881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.552912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.553388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.553419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.553987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.554016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.554585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.554616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.555159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.555189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 
00:27:24.487 [2024-07-25 12:13:11.555757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.555787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.556357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.556387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.556975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.557005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.557554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.557585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.558136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.558167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.558635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.558664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.559149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.559164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.559706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.559737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.560278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.560298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.560844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.560875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 
00:27:24.487 [2024-07-25 12:13:11.561446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.561478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.562066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.562081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.562655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.562685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.563245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.487 [2024-07-25 12:13:11.563276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.487 qpair failed and we were unable to recover it. 00:27:24.487 [2024-07-25 12:13:11.563820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.563849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.564350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.564382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.564922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.564936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.565737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.565770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.566281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.566311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.566855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.566884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 
00:27:24.488 [2024-07-25 12:13:11.567405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.567436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.568003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.568034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.568595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.568626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.569094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.569108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.569559] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.569589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.570136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.570166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.570737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.570773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.571295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.571309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.571843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.571873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.572285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.572316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 
00:27:24.488 [2024-07-25 12:13:11.572791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.572814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.573332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.573363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.573878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.573892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.574406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.574437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.574943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.574957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.575469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.575501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.575994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.576024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.576739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.576772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.577341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.577373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.577998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.578012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 
00:27:24.488 [2024-07-25 12:13:11.578657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.578705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.579262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.579304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.580139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.580155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.580699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.580713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.581155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.581170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.581565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.581579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.582018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.582059] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.582609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.582639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.583380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.583414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.583914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.583945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 
00:27:24.488 [2024-07-25 12:13:11.584421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.584435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.584923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.488 [2024-07-25 12:13:11.584937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.488 qpair failed and we were unable to recover it. 00:27:24.488 [2024-07-25 12:13:11.585463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.585493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.585977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.586007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.586570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.586601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.587170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.587202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.587772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.587802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.588378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.588409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.588914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.588943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.589472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.589503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 
00:27:24.489 [2024-07-25 12:13:11.590072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.590103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.590708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.590738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.591432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.591464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.592029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.592069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.592579] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.592609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.593037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.593082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.593576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.593606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.594131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.594162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.594688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.594718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.595262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.595277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 
00:27:24.489 [2024-07-25 12:13:11.595745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.595760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.596315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.596329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.596871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.596901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.597390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.597420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.597906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.597936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.598434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.598465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.599007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.599060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.599550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.599579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.600055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.600070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.600557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.600571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 
00:27:24.489 [2024-07-25 12:13:11.601062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.601077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.601602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.601632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.602210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.602241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.602849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.602879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.603446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.603461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.603976] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.604007] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.604520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.604551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.605057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.489 [2024-07-25 12:13:11.605088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.489 qpair failed and we were unable to recover it. 00:27:24.489 [2024-07-25 12:13:11.605594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.605624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.606104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.606135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 
00:27:24.490 [2024-07-25 12:13:11.606695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.606725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.607291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.607322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.607895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.607926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.608485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.608516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.609097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.609127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.609687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.609717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.610290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.610322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.610886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.610916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.611467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.611498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.612094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.612125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 
00:27:24.490 [2024-07-25 12:13:11.612708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.612722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.613159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.613191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.613685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.613714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.614285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.614303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.614738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.614752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.615269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.615300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.615765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.615795] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.616347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.616378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.616875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.616904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.617447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.617478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 
00:27:24.490 [2024-07-25 12:13:11.618070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.618102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.618704] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.618734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.619307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.619337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.619887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.619917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.620519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.620550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.621156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.621188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.621779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.621810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.622395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.622426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.623007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.623037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.623625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.623656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 
00:27:24.490 [2024-07-25 12:13:11.624253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.624284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.624859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.624890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.625392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.625423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.490 [2024-07-25 12:13:11.625970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.490 [2024-07-25 12:13:11.626000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.490 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.626555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.626585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.627130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.627161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.627719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.627749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.628330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.628345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.628889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.628919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.629484] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.629515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 
00:27:24.491 [2024-07-25 12:13:11.629992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.630027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.630634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.630666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.631241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.631272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.631857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.631887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.632432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.632463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.632963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.632994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.633557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.633588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.634168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.634199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.634814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.634843] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.635421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.635452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 
00:27:24.491 [2024-07-25 12:13:11.636036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.636075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.636591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.636621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.637175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.637207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.637758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.637789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.638326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.638357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.638909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.638939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.639486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.639517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.640091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.640123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.640653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.640683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.641265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.641280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 
00:27:24.491 [2024-07-25 12:13:11.641778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.641808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.642386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.642418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.643009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.643039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.643538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.491 [2024-07-25 12:13:11.643568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.491 qpair failed and we were unable to recover it. 00:27:24.491 [2024-07-25 12:13:11.644098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.644131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.644687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.644716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.645226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.645257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.645810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.645840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.646387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.646419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.646967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.646997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 
00:27:24.492 [2024-07-25 12:13:11.647593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.647625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.648471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.648506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.649088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.649119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.649610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.649641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.650111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.650141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.650609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.650639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.651185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.651200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.651756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.651786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.652293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.652325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.652895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.652925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 
00:27:24.492 [2024-07-25 12:13:11.653472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.653503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.654028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.654069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.654541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.654570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.655056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.655087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.655643] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.655673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.656274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.656305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.656882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.656915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.657497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.657529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.658077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.658109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.658658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.658688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 
00:27:24.492 [2024-07-25 12:13:11.659156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.659187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.659703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.659734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.660307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.660339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.660892] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.660923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.661523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.661554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.662057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.662088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.662572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.662604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.663156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.663188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.663756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.663787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.492 qpair failed and we were unable to recover it. 00:27:24.492 [2024-07-25 12:13:11.664364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.492 [2024-07-25 12:13:11.664396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-07-25 12:13:11.664873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.664903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.665453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.665485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.666088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.666120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.666708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.666739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.667316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.667348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.667827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.667858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.668338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.668369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.668933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.668963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.669514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.669551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.670080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.670112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-07-25 12:13:11.670593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.670624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.671106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.671138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.671651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.671666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.672190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.672223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.672831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.672862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.673458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.673474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.674006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.674020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.674567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.674599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.675133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.675165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.675673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.675703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-07-25 12:13:11.676246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.676279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.676809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.676840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.677443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.677458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.678025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.678066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.678652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.678683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.679286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.679318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.679890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.679922] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.680463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.680495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.681064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.681096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.681676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.681708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 
00:27:24.493 [2024-07-25 12:13:11.682274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.682291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.682734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.682750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.683271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.683303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.683857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.683889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.684471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.684503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.685069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.685108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.685645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.493 [2024-07-25 12:13:11.685676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.493 qpair failed and we were unable to recover it. 00:27:24.493 [2024-07-25 12:13:11.686233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.686273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.686776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.686808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.687312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.687344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-07-25 12:13:11.687848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.687878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.688429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.688461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.688873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.688903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.689286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.689318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.689792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.689823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.690189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.690222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.690725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.690756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.691281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.691313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.691785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.691817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.692367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.692401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-07-25 12:13:11.692816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.692847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.693277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.693308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.693846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.693877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.694409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.694441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.694860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.694891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.695444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.695476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.695948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.695979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.696501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.696517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.696953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.696968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.697213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.697228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-07-25 12:13:11.697748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.697763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.698291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.698307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.698745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.698760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.699211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.699227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.699685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.699715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.700169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.700184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.700682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.700713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.701105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.701136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.701605] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.701636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.702131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.702162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 
00:27:24.494 [2024-07-25 12:13:11.702692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.702723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.494 [2024-07-25 12:13:11.703094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.494 [2024-07-25 12:13:11.703127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.494 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.703633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.703664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.704072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.704104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.704816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.704848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.705378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.705410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.705809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.705823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.706274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.706290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.706737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.706751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.706932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.706962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-07-25 12:13:11.707506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.707537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.708023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.708069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.708551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.708580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.708951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.708993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.709478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.709511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.710061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.710093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.710635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.710666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.711196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.711227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.711706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.711736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.712236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.712267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-07-25 12:13:11.713065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.713099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.713603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.713634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.714065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.714096] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.714514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.714544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.715069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.715100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.715554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.715584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.716040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.716082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.716487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.716517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.716943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.716956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.717336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.717351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 
00:27:24.495 [2024-07-25 12:13:11.717802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.717832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.718321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.718352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.718770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.718801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.719277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.719314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.719803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.719819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.720314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.720347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.720861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.720891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.721415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.495 [2024-07-25 12:13:11.721446] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.495 qpair failed and we were unable to recover it. 00:27:24.495 [2024-07-25 12:13:11.722018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.496 [2024-07-25 12:13:11.722063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.496 qpair failed and we were unable to recover it. 00:27:24.496 [2024-07-25 12:13:11.722552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.496 [2024-07-25 12:13:11.722583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.496 qpair failed and we were unable to recover it. 
00:27:24.763 [2024-07-25 12:13:11.723171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.723205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.723786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.723802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.724331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.724365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.724868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.724899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.725453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.725484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.726208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.726241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.726653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.726668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.727122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.727137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.727631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.727661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.728165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.728198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 
00:27:24.763 [2024-07-25 12:13:11.728629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.728659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.729153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.729183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.763 [2024-07-25 12:13:11.729734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.763 [2024-07-25 12:13:11.729765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.763 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.730312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.730345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.730965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.730995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.731429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.731444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.731982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.731996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.732486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.732518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.733008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.733039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.733632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.733664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 
00:27:24.764 [2024-07-25 12:13:11.734190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.734228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.734729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.734759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.735256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.735288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.735712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.735743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.736294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.736326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.736800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.736831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.737330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.737361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.737896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.737927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.738397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.738428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.738909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.738940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 
00:27:24.764 [2024-07-25 12:13:11.739510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.739541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.740113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.740145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.740623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.740654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.741209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.741241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.741800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.741829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.742402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.764 [2024-07-25 12:13:11.742434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.764 qpair failed and we were unable to recover it. 00:27:24.764 [2024-07-25 12:13:11.742912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.742942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.743496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.743527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.743965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.743995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.744508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.744540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 
00:27:24.806 [2024-07-25 12:13:11.745089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.745121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.745650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.745680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.746218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.746249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.746806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.746837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.747380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.747411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.747944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.747959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.748404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.748435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.748960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.748996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.749502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.749533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.750199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.750214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 
00:27:24.806 [2024-07-25 12:13:11.750701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.750732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.751317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.806 [2024-07-25 12:13:11.751348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.806 qpair failed and we were unable to recover it. 00:27:24.806 [2024-07-25 12:13:11.751933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.751964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.752538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.752570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.753074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.753106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.753637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.753667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.754183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.754214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.754705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.754736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.755303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.755318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.755693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.755724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 
00:27:24.807 [2024-07-25 12:13:11.756190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.756222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.756645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.756660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.757105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.757135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.757667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.757697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.758173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.758205] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.758632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.758662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.759196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.759227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.759711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.759741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.760311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.760342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.760930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.760959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 
00:27:24.807 [2024-07-25 12:13:11.761456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.761487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.762063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.762095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.762595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.762625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.763152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.763167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.763658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.763689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.764243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.764275] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.764890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.764920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.765334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.807 [2024-07-25 12:13:11.765364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.807 qpair failed and we were unable to recover it. 00:27:24.807 [2024-07-25 12:13:11.765895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.765925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.766461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.766476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 
00:27:24.808 [2024-07-25 12:13:11.767038] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.767064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.767494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.767525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.768041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.768090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.768620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.768654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.769201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.769233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.769787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.769818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.770337] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.770368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.770878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.770909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.771481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.771513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.772099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.772131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 
00:27:24.808 [2024-07-25 12:13:11.772632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.772662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.773193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.773224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.773806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.773837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.774389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.774421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.775063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.775093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.775500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.775530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.776069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.776101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.776657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.776687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.777298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.777330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.777752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.777766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 
00:27:24.808 [2024-07-25 12:13:11.778289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.778304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.778752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.778767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.779308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.779339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.779765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.808 [2024-07-25 12:13:11.779796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.808 qpair failed and we were unable to recover it. 00:27:24.808 [2024-07-25 12:13:11.780351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.780382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.780809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.780839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.781400] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.781431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.782003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.782034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.782471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.782501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.783185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.783216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 
00:27:24.809 [2024-07-25 12:13:11.783639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.783654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.784236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.784251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.784769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.784783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.785227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.785242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.785652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.785681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.786162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.786200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.786722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.786753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.787359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.787389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.787927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.787942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.788487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.788518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 
00:27:24.809 [2024-07-25 12:13:11.789092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.789124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.789631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.789661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.790209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.790240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.790738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.790768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.791268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.791300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.791709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.791723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.792233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.792265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.792700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.792731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.793273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.793304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 00:27:24.809 [2024-07-25 12:13:11.793729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.809 [2024-07-25 12:13:11.793759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.809 qpair failed and we were unable to recover it. 
00:27:24.810 [2024-07-25 12:13:11.794246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.794277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.794693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.794724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.795255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.795286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.795735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.795765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.796275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.796305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.796925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.796939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.797413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.797428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.797879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.797894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.798313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.798328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.798734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.798764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 
00:27:24.810 [2024-07-25 12:13:11.799258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.799290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.799889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.799920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.800440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.800477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.801108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.801140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.801568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.801598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.802075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.802106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.802588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.802618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.803136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.803167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.803713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.803728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.804184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.804199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 
00:27:24.810 [2024-07-25 12:13:11.804738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.804753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.805280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.805295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.805746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.805761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.806370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.806384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.807098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.807129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.810 [2024-07-25 12:13:11.807633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.810 [2024-07-25 12:13:11.807648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.810 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.808164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.808180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.808735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.808766] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.809376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.809408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.809931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.809945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 
00:27:24.811 [2024-07-25 12:13:11.810429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.810444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.810861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.810891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.811343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.811374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.811786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.811800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.812249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.812281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.812844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.812859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.813234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.813249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.813655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.813685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.814253] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.814268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.814742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.814772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 
00:27:24.811 [2024-07-25 12:13:11.815328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.815359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.815836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.815853] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.816263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.816279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.816803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.816834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.817327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.811 [2024-07-25 12:13:11.817358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.811 qpair failed and we were unable to recover it. 00:27:24.811 [2024-07-25 12:13:11.817790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.817820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.818686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.818774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.819373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.819414] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.819878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.819911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.820670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.820705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 
00:27:24.812 [2024-07-25 12:13:11.821270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.821303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.821800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.821814] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.822308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.822340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.822831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.822862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.823573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.823607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.824125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.824157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.824715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.824745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.825214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.825246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.825768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.825782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.826304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.826319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 
00:27:24.812 [2024-07-25 12:13:11.826779] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.826809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.827300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.827332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.827759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.827789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.828332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.828364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.828905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.828935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.829427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.829458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.830041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.830094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.830528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.830558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.831083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.831098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.831601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.831615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 
00:27:24.812 [2024-07-25 12:13:11.832164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.832179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.812 qpair failed and we were unable to recover it. 00:27:24.812 [2024-07-25 12:13:11.832575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.812 [2024-07-25 12:13:11.832605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.833193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.833224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.833725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.833754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.834232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.834263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.834684] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.834715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.835265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.835297] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.835743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.835773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.836262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.836303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.836796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.836826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 
00:27:24.813 [2024-07-25 12:13:11.837308] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.837345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.837833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.837863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.838426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.838458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.838874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.838905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.839436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.839467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.839902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.839933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.840420] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.840452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.840996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.841025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.841757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.841791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.842457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.842473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 
00:27:24.813 [2024-07-25 12:13:11.842938] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.842968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.843472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.843503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.844106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.844137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.844713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.844743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.845239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.845270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.845825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.845855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.846430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.846462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.813 qpair failed and we were unable to recover it. 00:27:24.813 [2024-07-25 12:13:11.846949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.813 [2024-07-25 12:13:11.846980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.847557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.847588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.848071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.848087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 
00:27:24.814 [2024-07-25 12:13:11.848490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.848520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.849115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.849146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.849654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.849684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.850173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.850204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.850734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.850765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.851388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.851420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.851926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.851957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.852438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.852476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.852904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.852934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 00:27:24.814 [2024-07-25 12:13:11.853429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.814 [2024-07-25 12:13:11.853460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.814 qpair failed and we were unable to recover it. 
00:27:24.821 [2024-07-25 12:13:11.955721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.955750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.956305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.956336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.956772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.956802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.957297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.957328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.957918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.957947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.958454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.958485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.958980] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.959010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.959584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.959616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.960217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.960248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 00:27:24.821 [2024-07-25 12:13:11.960740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.821 [2024-07-25 12:13:11.960755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.821 qpair failed and we were unable to recover it. 
00:27:24.822 [2024-07-25 12:13:11.961276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.961307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.961917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.961948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.962463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.962493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.963033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.963075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.963515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.963545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.964105] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.964137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.964629] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.964659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.965209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.965241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.965671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.965700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.966227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.966258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 
00:27:24.822 [2024-07-25 12:13:11.966772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.966802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.967353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.967384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.967886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.967915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.968473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.968505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.969010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.969054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.969537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.969568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.970173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.970203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.970735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.970765] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.971341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.971371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.971880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.971910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 
00:27:24.822 [2024-07-25 12:13:11.972473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.972505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.973001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.973031] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.973524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.973554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.974100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.974132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.974567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.974597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.975169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.975200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.976008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.976041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.976500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.976537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.977031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.977076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 00:27:24.822 [2024-07-25 12:13:11.977591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.822 [2024-07-25 12:13:11.977621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.822 qpair failed and we were unable to recover it. 
00:27:24.822 [2024-07-25 12:13:11.978097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.978129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.978612] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.978643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.979199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.979230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.979766] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.979796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.980221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.980253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.980761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.980792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.981292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.981323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.981896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.981928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.982508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.982539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.983026] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.983079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 
00:27:24.823 [2024-07-25 12:13:11.983508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.983538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.984117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.984157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.984573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.984604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.985392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.985425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.985971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.986001] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.986502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.986535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.987032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.987077] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.987514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.987545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.988055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.988087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.988655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.988671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 
00:27:24.823 [2024-07-25 12:13:11.989206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.989237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.989721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.989751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.990333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.990365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.990873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.990903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.991372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.991403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.991888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.991919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.992478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.992494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.992957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.992971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.993504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.993536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 00:27:24.823 [2024-07-25 12:13:11.994112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.994144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.823 qpair failed and we were unable to recover it. 
00:27:24.823 [2024-07-25 12:13:11.994689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.823 [2024-07-25 12:13:11.994718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.995290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.995320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.995754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.995784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.996339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.996373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.996882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.996911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.997457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.997488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.997914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.997944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.998475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.998507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.999075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.999105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:11.999641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:11.999656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 
00:27:24.824 [2024-07-25 12:13:12.000179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.000195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.000647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.000677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.001283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.001298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.001712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.001747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.002269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.002300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.002773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.002803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.003345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.003376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.003988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.004003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.004483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.004498] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 00:27:24.824 [2024-07-25 12:13:12.004897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.004911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:24.824 qpair failed and we were unable to recover it. 
00:27:24.824 [2024-07-25 12:13:12.005431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:24.824 [2024-07-25 12:13:12.005463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.005960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.005991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.006464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.006500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.006940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.006971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.007481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.007495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.007889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.007904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.008477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.008492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.008964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.008979] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.009447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.009479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.010115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.010146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 
00:27:25.093 [2024-07-25 12:13:12.010631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.010661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.011159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.011191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.093 [2024-07-25 12:13:12.011687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.093 [2024-07-25 12:13:12.011717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.093 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.012226] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.012242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.012742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.012773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.013260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.013291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.013796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.013826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.014404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.014436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.014996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.015026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.015619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.015650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 
00:27:25.094 [2024-07-25 12:13:12.016233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.016265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.016703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.016733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.017250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.017282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.017834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.017863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.018340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.018371] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.018848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.018863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.019429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.019460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.019908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.019945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.020498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.020529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.021096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.021114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 
00:27:25.094 [2024-07-25 12:13:12.021642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.021672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.022240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.022272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.022701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.022730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.023288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.023319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.023824] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.023854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.024343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.024374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.024923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.024954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.025459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.025490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.025947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.025987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.026490] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.026521] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 
00:27:25.094 [2024-07-25 12:13:12.026953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.026982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.027497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.027529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.028100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.028132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.028611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.028626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.029136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.029168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.029719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.029749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.030294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.030309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.030771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.030785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.094 [2024-07-25 12:13:12.031183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.094 [2024-07-25 12:13:12.031214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.094 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.031740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.031769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 
00:27:25.095 [2024-07-25 12:13:12.032279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.032310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.032748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.032778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.033267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.033299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.033725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.033755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.034263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.034295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.034782] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.034812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.035356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.035394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.035872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.035902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.036428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.036460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.036885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.036915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 
00:27:25.095 [2024-07-25 12:13:12.037461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.037491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.037974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.038005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.038469] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.038501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.039067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.039099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.039613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.039644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.040114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.040145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.040707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.040739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.041306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.041337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.041841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.041872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.042479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.042511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 
00:27:25.095 [2024-07-25 12:13:12.043109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.043142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.043653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.043684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.044176] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.044191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.044640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.044654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.045169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.045201] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.045621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.045651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.046139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.046171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.046678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.046708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.047201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.047231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.047663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.047693] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 
00:27:25.095 [2024-07-25 12:13:12.048233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.048249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.048710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.048725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.049238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.049253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.049702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.049717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.050265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.095 [2024-07-25 12:13:12.050280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.095 qpair failed and we were unable to recover it. 00:27:25.095 [2024-07-25 12:13:12.050769] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.050799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.051288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.051302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.051703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.051718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.052248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.052280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.052781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.052812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 
00:27:25.096 [2024-07-25 12:13:12.053293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.053325] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.053802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.053816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.054315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.054348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.054770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.054801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.055348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.055379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.055809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.055840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.056318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.056349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.056880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.056911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.057333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.057364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.057855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.057886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 
00:27:25.096 [2024-07-25 12:13:12.058437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.058452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.059875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.059911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.060473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.060490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.060951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.060966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.061472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.061487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.062131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.062163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.062694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.062725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.063223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.063254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.063686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.063716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.064238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.064253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 
00:27:25.096 [2024-07-25 12:13:12.064747] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.064762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.065187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.065219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.065719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.065750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.066275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.066289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.066752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.066791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.067353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.067384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.067804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.067818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.068335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.068351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.068790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.068805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 00:27:25.096 [2024-07-25 12:13:12.069324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.096 [2024-07-25 12:13:12.069356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.096 qpair failed and we were unable to recover it. 
00:27:25.096 [2024-07-25 12:13:12.069848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.069878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.070419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.070451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.070957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.070994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.071491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.071506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.071935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.071952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.072489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.072520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.073079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.073111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.073546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.073560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.073949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.073962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.074455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.074470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 
00:27:25.097 [2024-07-25 12:13:12.075012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.075070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.075582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.075612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.076219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.076234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.076673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.076688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.077269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.077284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.077690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.077705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.078121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.078136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.078591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.078621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.079199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.079213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.079773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.079803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 
00:27:25.097 [2024-07-25 12:13:12.080376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.080407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.080900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.080929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.081440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.081472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.081995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.082009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.082431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.082462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.082886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.082915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.083430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.083460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.084037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.084064] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.084567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.084597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.085129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.085144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 
00:27:25.097 [2024-07-25 12:13:12.085716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.085746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.086298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.086332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.086734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.086764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.087240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.087255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.087653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.087683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.088165] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.088196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.088669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.097 [2024-07-25 12:13:12.088699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.097 qpair failed and we were unable to recover it. 00:27:25.097 [2024-07-25 12:13:12.089129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.089161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.089649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.089680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.090240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.090271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 
00:27:25.098 [2024-07-25 12:13:12.090709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.090741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.091080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.091111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.091688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.091719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.092262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.092293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.092776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.092808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.093335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.093369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.093859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.093888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.094478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.094509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.094953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.094995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.095444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.095459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 
00:27:25.098 [2024-07-25 12:13:12.095974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.096004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.096515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.096546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.097064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.097095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.097582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.097612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.098096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.098128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.098624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.098653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.099220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.099251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.099739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.099769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.100313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.100356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.100788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.100818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 
00:27:25.098 [2024-07-25 12:13:12.101357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.101389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.101904] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.101933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.102495] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.102526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.103041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.103083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.103512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.103542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.103963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.098 [2024-07-25 12:13:12.103993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.098 qpair failed and we were unable to recover it. 00:27:25.098 [2024-07-25 12:13:12.104598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.104628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.105133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.105165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.105658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.105688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.106232] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.106263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 
00:27:25.099 [2024-07-25 12:13:12.106748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.106777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.107346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.107377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.107889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.107920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.108474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.108506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.109117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.109148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.109575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.109589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.110147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.110178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.110613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.110643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.111214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.111246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.111725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.111754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 
00:27:25.099 [2024-07-25 12:13:12.112240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.112286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.112715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.112745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.113222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.113253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.113696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.113726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.114313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.114343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.114820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.114850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.115319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.115350] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.115821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.115851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.116418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.116449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.116927] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.116957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 
00:27:25.099 [2024-07-25 12:13:12.117498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.117530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.118090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.118132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.118651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.118665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.119175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.119206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.119743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.119773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.120353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.120384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.120910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.120941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.121397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.121429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.121860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.121891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.122488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.122525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 
00:27:25.099 [2024-07-25 12:13:12.123117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.099 [2024-07-25 12:13:12.123147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.099 qpair failed and we were unable to recover it. 00:27:25.099 [2024-07-25 12:13:12.123645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.123677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.124245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.124276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.124701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.124731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.125160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.125175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.125617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.125647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.126159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.126192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.126748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.126778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.127321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.127336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.127833] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.127848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 
00:27:25.100 [2024-07-25 12:13:12.128385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.128416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.128991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.129021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.129517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.129549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.130134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.130166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.130712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.130741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.131265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.131281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.131811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.131841] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.132569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.132601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.133191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.133223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.133797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.133827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 
00:27:25.100 [2024-07-25 12:13:12.134375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.134407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.134925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.134956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.135382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.135412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.135886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.135916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.136427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.136458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.136952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.136981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.137421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.137459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.138015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.138057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.138533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.138563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.139134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.139165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 
00:27:25.100 [2024-07-25 12:13:12.139598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.139628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.140138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.140170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.140569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.140583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.141113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.141144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.141636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.100 [2024-07-25 12:13:12.141666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.100 qpair failed and we were unable to recover it. 00:27:25.100 [2024-07-25 12:13:12.142149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.142180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.142614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.142643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.143088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.143120] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.143651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.143680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.144227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.144258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 
00:27:25.101 [2024-07-25 12:13:12.144740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.144770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.145252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.145284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.145911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.145941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.146429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.146459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.147023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.147065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.147569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.147600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.148077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.148109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.148592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.148622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.149130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.149162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.149631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.149660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 
00:27:25.101 [2024-07-25 12:13:12.150161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.150193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.150677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.150707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.151222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.151241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.151703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.151742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.152338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.152373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.152965] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.152996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.153443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.153474] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.154039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.154080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.154587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.154621] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.155195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.155211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 
00:27:25.101 [2024-07-25 12:13:12.155706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.155736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.156280] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.156312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.156809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.156839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.157365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.157396] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.157898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.157927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.158426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.158457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.158968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.159000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.159580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.159611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.160190] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.160222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 00:27:25.101 [2024-07-25 12:13:12.160720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.101 [2024-07-25 12:13:12.160750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.101 qpair failed and we were unable to recover it. 
00:27:25.101 [2024-07-25 12:13:12.161319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.161349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.161830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.161860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.162338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.162369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.162849] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.162879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.163390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.163420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.163974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.164004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.164436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.164468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.165074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.165105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.165680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.165711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.166285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.166316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 
00:27:25.102 [2024-07-25 12:13:12.166798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.166829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.167371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.167402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.167832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.167862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.168344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.168375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.168866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.168895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.169384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.169416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.169896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.169925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.170476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.170507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.171125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.171156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.171690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.171720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 
00:27:25.102 [2024-07-25 12:13:12.172184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.172214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.172764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.172794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.173436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.173467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.173999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.174029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.174592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.174623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.175218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.175249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.175682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.175711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.176270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.176300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.176879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.176909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.177409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.177440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 
00:27:25.102 [2024-07-25 12:13:12.177996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.178026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.178620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.178650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.179191] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.179222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.179765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.102 [2024-07-25 12:13:12.179794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.102 qpair failed and we were unable to recover it. 00:27:25.102 [2024-07-25 12:13:12.180343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.180375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.180922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.180952] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.181528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.181559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.182081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.182112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.182671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.182701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.183294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.183308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 
00:27:25.103 [2024-07-25 12:13:12.183770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.183799] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.184350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.184393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.184867] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.184897] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.185387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.185418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.185970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.186000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.186517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.186549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.187041] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.187084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.187639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.187668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.188192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.188224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.188791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.188821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 
00:27:25.103 [2024-07-25 12:13:12.189376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.189408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.189941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.189977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.190531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.190562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.191151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.191183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.191703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.191718] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.192115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.192145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.192694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.192724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.193325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.193356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.193974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.194003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.103 [2024-07-25 12:13:12.194514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.194546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 
00:27:25.103 [2024-07-25 12:13:12.195106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.103 [2024-07-25 12:13:12.195137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.103 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.195692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.195721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.196309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.196340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.196899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.196928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.197436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.197476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.198022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.198060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.198625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.198656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.199237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.199268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.199789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.199819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.200395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.200410] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 
00:27:25.104 [2024-07-25 12:13:12.200952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.200983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.201548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.201579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.202139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.202170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.202709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.202739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.203298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.203329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.203906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.203936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.204475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.204506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.205072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.205105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.205655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.205692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.206261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.206292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 
00:27:25.104 [2024-07-25 12:13:12.206847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.206877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.207472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.207503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.208115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.208144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.208695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.208725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.209300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.209331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.209908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.209937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.210541] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.210555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.211074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.211105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.211702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.211733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.212330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.212360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 
00:27:25.104 [2024-07-25 12:13:12.212858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.212888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.213461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.213493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.213967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.213997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.214534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.214565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.215147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.104 [2024-07-25 12:13:12.215179] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.104 qpair failed and we were unable to recover it. 00:27:25.104 [2024-07-25 12:13:12.215759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.215789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.216330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.216374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.216903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.216933] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.217487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.217518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.218100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.218131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 
00:27:25.105 [2024-07-25 12:13:12.218695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.218724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.219300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.219330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.219903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.219934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.220496] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.220527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.221096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.221127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.221692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.221727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.222220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.222251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.222802] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.222831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.223429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.223460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.224037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.224076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 
00:27:25.105 [2024-07-25 12:13:12.224677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.224707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.225310] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.225341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.225829] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.225858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.226338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.226369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.226846] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.226875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.227423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.227454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.228006] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.228035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.228673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.228703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.229248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.229279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.229838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.229869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 
00:27:25.105 [2024-07-25 12:13:12.230356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.230387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.230870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.230899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.231380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.231411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.231953] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.231983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.232581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.232613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.233189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.233220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.233817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.233848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.234449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.234479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.235084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.105 [2024-07-25 12:13:12.235115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.105 qpair failed and we were unable to recover it. 00:27:25.105 [2024-07-25 12:13:12.235707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.235737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 
00:27:25.106 [2024-07-25 12:13:12.236314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.236345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.236871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.236901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.237474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.237505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.238057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.238088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.238636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.238666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.239236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.239268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.239730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.239759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.240303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.240334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.240935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.240964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.241560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.241590] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 
00:27:25.106 [2024-07-25 12:13:12.242203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.242234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.242808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.242822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.243327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.243342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.243864] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.243893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.244394] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.244426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.244956] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.244985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.245560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.245597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.246096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.246127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.246659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.246688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.247189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.247221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 
00:27:25.106 [2024-07-25 12:13:12.247647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.247685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.248195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.248226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.248801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.248831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.249429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.249460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.250065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.250095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.250671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.250702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.251172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.251204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.251760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.251789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.252376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.252408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.252988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.253017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 
00:27:25.106 [2024-07-25 12:13:12.253607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.253637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.254203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.254234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.254711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.254740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.255268] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.255284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.255808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.255837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.256429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.106 [2024-07-25 12:13:12.256461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.106 qpair failed and we were unable to recover it. 00:27:25.106 [2024-07-25 12:13:12.257067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.257099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.257673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.257703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.258306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.258337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.258923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.258953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 
00:27:25.107 [2024-07-25 12:13:12.259509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.259540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.260092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.260123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.260674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.260689] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.261135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.261172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.261657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.261686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.262217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.262248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.262783] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.262813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.263389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.263420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.263997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.264026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.264618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.264649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 
00:27:25.107 [2024-07-25 12:13:12.265252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.265282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.265830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.265844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.266384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.266415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.266969] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.266999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.267570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.267601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.268131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.268163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.268751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.268780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.269359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.269390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.269882] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.269912] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.270488] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.270519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 
00:27:25.107 [2024-07-25 12:13:12.271093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.271124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.271636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.271665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.272220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.272251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.272825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.272855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.273425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.273455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.273930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.273960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.274519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.274550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.275111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.275143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.275700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.275731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.276289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.276320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 
00:27:25.107 [2024-07-25 12:13:12.276735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.276770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.277250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.277282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.277781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.277811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.278366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.278397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.107 qpair failed and we were unable to recover it. 00:27:25.107 [2024-07-25 12:13:12.278954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.107 [2024-07-25 12:13:12.278984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.279583] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.279614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.280199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.280230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.280771] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.280800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.281402] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.281432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.282015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.282061] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 
00:27:25.108 [2024-07-25 12:13:12.282661] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.282691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.283296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.283327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.283872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.283901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.284392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.284423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.284990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.285020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.285527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.285558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.286115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.286145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.286717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.286747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.287327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.287359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.287943] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.287973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 
00:27:25.108 [2024-07-25 12:13:12.288568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.288600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.289092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.289123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.289690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.289719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.290295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.290326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.290902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.290932] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.291487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.291519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.292066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.292097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.292656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.292687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.293277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.293292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.293727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.293741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 
00:27:25.108 [2024-07-25 12:13:12.294263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.294296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.294862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.294893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.295436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.295471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.296031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.296078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.296656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.296688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.297246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.297278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.297848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.108 [2024-07-25 12:13:12.297865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.108 qpair failed and we were unable to recover it. 00:27:25.108 [2024-07-25 12:13:12.298363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.298394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.298970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.298985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.299531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.299563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 
00:27:25.109 [2024-07-25 12:13:12.300147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.300177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.300844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.300875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.301431] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.301494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.302091] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.302140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.302699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.302750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.303281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.303341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.303866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.303918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.304482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.304543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.305060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.305101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.305620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.305672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 
00:27:25.109 [2024-07-25 12:13:12.306187] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.306204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.306740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.306788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.307275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.307312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.307895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.307973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.308567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.308616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.309225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.309276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.309859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.309892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.310355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.310388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.310909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.310943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.311516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.311541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 
00:27:25.109 [2024-07-25 12:13:12.312060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.312112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.312718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.312751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.313234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.313265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.313740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.313788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.314373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.314424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.315004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.315067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.315676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.315746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.316265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.316316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.316906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.316963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.317545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.317568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 
00:27:25.109 [2024-07-25 12:13:12.317992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.318040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.318540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.318578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.319084] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.319118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.319702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.319734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.109 [2024-07-25 12:13:12.320296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.109 [2024-07-25 12:13:12.320314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.109 qpair failed and we were unable to recover it. 00:27:25.110 [2024-07-25 12:13:12.320826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.110 [2024-07-25 12:13:12.320842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.110 qpair failed and we were unable to recover it. 00:27:25.110 [2024-07-25 12:13:12.321367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.110 [2024-07-25 12:13:12.321398] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.110 qpair failed and we were unable to recover it. 00:27:25.110 [2024-07-25 12:13:12.321913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.110 [2024-07-25 12:13:12.321927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.110 qpair failed and we were unable to recover it. 00:27:25.110 [2024-07-25 12:13:12.322389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.110 [2024-07-25 12:13:12.322405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.110 qpair failed and we were unable to recover it. 00:27:25.110 [2024-07-25 12:13:12.322971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.110 [2024-07-25 12:13:12.323000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.110 qpair failed and we were unable to recover it. 
[... the identical three-message sequence (posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111; nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420; qpair failed and we were unable to recover it.) repeats for roughly 200 further connection attempts, timestamps 2024-07-25 12:13:12.323574 through 12:13:12.438659, elapsed time advancing from 00:27:25.110 to 00:27:25.383 ...]
00:27:25.383 [2024-07-25 12:13:12.439236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.439267] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.439868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.439898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.440507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.440539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.441116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.441147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.441726] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.441756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.442298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.442335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.442931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.442968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.443406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.443421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.443957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.443971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.444540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.444572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 
00:27:25.383 [2024-07-25 12:13:12.445069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.445100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.445654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.445684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.446273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.446287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.446803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.446833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.447408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.447440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.448024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.448065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.448545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.448575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.449129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.449160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.449752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.449782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.450287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.450302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 
00:27:25.383 [2024-07-25 12:13:12.450818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.450848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.451399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.451430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.451988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.452018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.452615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.452646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.453221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.453252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.383 qpair failed and we were unable to recover it. 00:27:25.383 [2024-07-25 12:13:12.453851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.383 [2024-07-25 12:13:12.453882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.454499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.454529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.455114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.455146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.455741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.455771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.456320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.456335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 
00:27:25.384 [2024-07-25 12:13:12.456844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.456873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.457456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.457488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.458052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.458089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.458584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.458615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.459156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.459187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.459786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.459816] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.460437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.460468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.461021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.461063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.461618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.461648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.462242] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.462257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 
00:27:25.384 [2024-07-25 12:13:12.462843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.462873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.463432] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.463464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.464033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.464073] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.464622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.464653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.465206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.465237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.465800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.465830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.466332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.466347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.466791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.466806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.467324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.467355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.467798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.467837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 
00:27:25.384 [2024-07-25 12:13:12.468303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.468335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.468911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.468941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.469513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.469544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.470125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.470156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.470712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.470743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.471292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.471323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.471841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.471872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.472424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.472455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.472928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.472957] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.473453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.473484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 
00:27:25.384 [2024-07-25 12:13:12.473992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.474022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.474526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.474556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.475112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.384 [2024-07-25 12:13:12.475144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.384 qpair failed and we were unable to recover it. 00:27:25.384 [2024-07-25 12:13:12.475697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.475727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.476325] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.476356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.476948] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.476978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.477573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.477605] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.478198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.478213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.478692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.478722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.479270] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.479302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 
00:27:25.385 [2024-07-25 12:13:12.479899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.479930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.480455] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.480486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.481069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.481100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.481669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.481701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.482286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.482302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.482814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.482846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.483440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.483472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.483964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.483994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.484574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.484607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.485137] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.485169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 
00:27:25.385 [2024-07-25 12:13:12.485699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.485729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.486305] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.486337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.486856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.486886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.487436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.487468] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.487964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.487993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.488465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.488496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.488998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.489028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.489603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.489635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.490149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.490164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.490709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.490739] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 
00:27:25.385 [2024-07-25 12:13:12.491316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.491348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.491834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.491863] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.492392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.492423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.493004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.493034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.493608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.493639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.494234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.494265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.494852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.494882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.495351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.495381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.495937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.495966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 00:27:25.385 [2024-07-25 12:13:12.496517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.385 [2024-07-25 12:13:12.496549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.385 qpair failed and we were unable to recover it. 
00:27:25.385 [2024-07-25 12:13:12.497154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.497192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.497774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.497804] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.498404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.498435] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.499010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.499040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.499527] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.499558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.499968] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.499998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.500387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.500419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.500935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.500966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.501502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.501533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.502085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.502116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 
00:27:25.386 [2024-07-25 12:13:12.502870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.502903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.503439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.503454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.503957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.503987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.504421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.504452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.505013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.505055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.505536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.505567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.506054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.506086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.506454] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.506484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.506955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.506986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.507487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.507518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 
00:27:25.386 [2024-07-25 12:13:12.508072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.508103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.508655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.508686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.509263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.509294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.509893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.509924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.510433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.510447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.510912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.510942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.511447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.511478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.512066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.512104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.512689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.512720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.513257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.513289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 
00:27:25.386 [2024-07-25 12:13:12.513861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.513875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.514413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.514445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.514962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.514992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.515557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.515589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.516175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.516207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.517031] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.517076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.517639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.517654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.518128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.386 [2024-07-25 12:13:12.518160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.386 qpair failed and we were unable to recover it. 00:27:25.386 [2024-07-25 12:13:12.518721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.518753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.519213] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.519244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 
00:27:25.387 [2024-07-25 12:13:12.519800] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.519830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.520380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.520412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.520917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.520947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.521445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.521476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.521977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.522009] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.522540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.522571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.523129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.523161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.523713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.523744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.524323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.524353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.524931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.524961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 
00:27:25.387 [2024-07-25 12:13:12.525471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.525502] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.525825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.525854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.526401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.526432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.527027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.527071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.527644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.527681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.528244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.528291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.528899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.528929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.529520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.529551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.530148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.530180] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.530770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.530800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 
00:27:25.387 [2024-07-25 12:13:12.531343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.531375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.531933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.531963] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.532563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.532595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.533154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.533169] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.387 [2024-07-25 12:13:12.533679] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.387 [2024-07-25 12:13:12.533709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.387 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.534311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.534341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.534884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.534914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.535409] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.535440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.536035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.536078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.536569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.536583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 
00:27:25.388 [2024-07-25 12:13:12.537039] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.537062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.537569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.537598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.538157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.538188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.538777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.538807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.539538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.539572] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.540146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.540177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.540667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.540681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.541192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.541207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.541763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.541792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.542392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.542407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 
00:27:25.388 [2024-07-25 12:13:12.542838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.542867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.543435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.543476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.543837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.543852] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.544289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.544321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.544856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.544887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.545478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.545509] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.545985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.545999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.546500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.546531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.547065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.547081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.547585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.547616] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 
00:27:25.388 [2024-07-25 12:13:12.548218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.548251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.548789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.548839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.549421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.549471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.550011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.550093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.550558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.550593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.551086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.551138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.551696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.551744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.552399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.552450] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.553078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.553148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.553752] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.553817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 
00:27:25.388 [2024-07-25 12:13:12.554647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.554681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.555247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.555298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.388 qpair failed and we were unable to recover it. 00:27:25.388 [2024-07-25 12:13:12.555816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.388 [2024-07-25 12:13:12.555868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.556313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.556383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.556937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.556976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.557507] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.557522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.557950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.557965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.558366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.558381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.558884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.558898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.559277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.559292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 
00:27:25.389 [2024-07-25 12:13:12.559777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.559808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.560300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.560331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.560799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.560829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.561303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.561334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.561878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.561910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.562438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.562469] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.563024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.563078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.563525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.563557] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.564138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.564171] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.564698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.564712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 
00:27:25.389 [2024-07-25 12:13:12.565155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.565173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.565554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.565569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.565958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.565994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.566545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.566560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.567074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.567106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.567616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.567650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.568173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.568207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.568622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.568655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.569061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.569094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.569534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.569567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 
00:27:25.389 [2024-07-25 12:13:12.570111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.570158] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.570660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.570692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.571146] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.571187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.571744] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.571774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.572243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.572314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.389 [2024-07-25 12:13:12.572908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.389 [2024-07-25 12:13:12.572968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.389 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.573587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.573638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.574216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.574271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.574850] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.574882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.575344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.575376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 
00:27:25.390 [2024-07-25 12:13:12.575890] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.575906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.576421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.576454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.576955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.576986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.577519] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.577534] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.577986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.578000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.578521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.578554] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.579027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.579079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.579604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.579618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.580374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.580408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.580940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.580959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 
00:27:25.390 [2024-07-25 12:13:12.581330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.581363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.581857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.581887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.582296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.582329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.582816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.582830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.583366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.583381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.583896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.583911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.584354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.584385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.584910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.584940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.585498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.585529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.586074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.586105] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 
00:27:25.390 [2024-07-25 12:13:12.586677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.586707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.587245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.587276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.587818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.587847] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.588355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.588387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.588910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.588940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.589486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.589516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.590078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.590109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.590602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.590633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.591123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.591154] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.591677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.591707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 
00:27:25.390 [2024-07-25 12:13:12.592249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.592281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.592804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.592834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.390 [2024-07-25 12:13:12.593354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.390 [2024-07-25 12:13:12.593385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.390 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.593961] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.593992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.594515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.594546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.595012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.595053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.595623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.595652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.595929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.595959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.596500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.596531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.597078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.597108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 
00:27:25.391 [2024-07-25 12:13:12.597635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.597664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.598157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.598188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.598735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.598764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.599235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.599265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.599733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.599764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.600320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.600359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.600876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.600891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.601323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.601338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.601858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.601887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.602300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.602331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 
00:27:25.391 [2024-07-25 12:13:12.602862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.602892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.603434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.603465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.603937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.603968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.604487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.604518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.605055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.605086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.605648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.605678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.606224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.606253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.606773] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.606787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.607300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.607315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.607830] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.607860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 
00:27:25.391 [2024-07-25 12:13:12.608441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.608472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.608993] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.609023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.609528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.609559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.609868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.609898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.391 [2024-07-25 12:13:12.610356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.391 [2024-07-25 12:13:12.610387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.391 qpair failed and we were unable to recover it. 00:27:25.392 [2024-07-25 12:13:12.610929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.392 [2024-07-25 12:13:12.610958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.392 qpair failed and we were unable to recover it. 00:27:25.392 [2024-07-25 12:13:12.611362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.392 [2024-07-25 12:13:12.611393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.392 qpair failed and we were unable to recover it. 00:27:25.392 [2024-07-25 12:13:12.611876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.392 [2024-07-25 12:13:12.611906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.392 qpair failed and we were unable to recover it. 00:27:25.392 [2024-07-25 12:13:12.612427] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.392 [2024-07-25 12:13:12.612459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.392 qpair failed and we were unable to recover it. 00:27:25.392 [2024-07-25 12:13:12.612929] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.392 [2024-07-25 12:13:12.612960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.392 qpair failed and we were unable to recover it. 
00:27:25.669 [2024-07-25 12:13:12.716845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.716874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.717411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.717442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.717931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.717961] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.718447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.718478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.718936] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.718966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.719465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.719495] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.719952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.719982] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.720426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.720456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.720990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.721019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.721446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.721476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 
00:27:25.669 [2024-07-25 12:13:12.721727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.721757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.722214] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.722228] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.722690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.722703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.723210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.723242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.723772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.723801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.724338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.724368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.724902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.724931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.725440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.725470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.725918] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.725948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.726436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.726467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 
00:27:25.669 [2024-07-25 12:13:12.726876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.726905] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.727369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.727399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.727932] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.727962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.728429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.728459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.728960] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.728995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.729563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.729593] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.730120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.730151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.730697] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.730727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.669 [2024-07-25 12:13:12.731251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.669 [2024-07-25 12:13:12.731281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.669 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.731767] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.731796] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 
00:27:25.670 [2024-07-25 12:13:12.732334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.732365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.732905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.732935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.733471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.733501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.734036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.734075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.734606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.734635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.734888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.734917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.735452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.735488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.735994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.736023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.736589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.736619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.736907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.736936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 
00:27:25.670 [2024-07-25 12:13:12.737341] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.737354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.737780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.737793] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.738285] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.738315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.738828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.738858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.739385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.739416] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.739954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.739984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.740479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.740510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.740973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.741002] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.741460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.741491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.742005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.742036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 
00:27:25.670 [2024-07-25 12:13:12.742505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.742535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.743005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.743040] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.743582] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.743612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.744083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.744114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.744578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.744607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.745124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.745155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.745715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.745744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.746210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.746240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.746775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.746805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.747288] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.747302] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 
00:27:25.670 [2024-07-25 12:13:12.747754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.747767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.748217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.748249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.748757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.748771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.749275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.749306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.749770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.749809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.750034] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.750055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.670 [2024-07-25 12:13:12.750567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.670 [2024-07-25 12:13:12.750581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.670 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.751014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.751053] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.751564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.751594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.752060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.752092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 
00:27:25.671 [2024-07-25 12:13:12.752567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.752598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.753066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.753097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.753586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.753617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.754083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.754114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.754587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.754617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.755079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.755109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.755571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.755601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.756113] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.756145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.756655] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.756690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.757252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.757284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 
00:27:25.671 [2024-07-25 12:13:12.757740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.757769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.758282] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.758296] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.758831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.758860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.759262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.759292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.759808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.759837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.760374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.760405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.760886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.760916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.761309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.761340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.761851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.761881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.762413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.762443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 
00:27:25.671 [2024-07-25 12:13:12.762930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.762959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.763475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.763506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.764075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.764107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.764592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.764622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.765069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.765101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.765549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.765579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.766036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.766074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.766574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.766603] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.766999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.767028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.767483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.767513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 
00:27:25.671 [2024-07-25 12:13:12.767974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.768003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.768474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.768505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.769040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.769081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.769594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.769625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.671 [2024-07-25 12:13:12.770087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.671 [2024-07-25 12:13:12.770119] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.671 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.770597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.770626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.771092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.771123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.771530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.771560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.772096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.772127] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.772652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.772681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 
00:27:25.672 [2024-07-25 12:13:12.773147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.773178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.773646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.773675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.773861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.773890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.774429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.774460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.774937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.774966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.775412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.775443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.775934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.775964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.776376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.776407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.776934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.776964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.777487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.777518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 
00:27:25.672 [2024-07-25 12:13:12.778030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.778079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.778566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.778596] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.779121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.779152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.779690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.779720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.780228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.780242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.780728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.780742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.781172] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.781186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.781692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.781706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.782134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.782164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.782486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.782517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 
00:27:25.672 [2024-07-25 12:13:12.782988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.783026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.783250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.783265] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.783712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.783742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.784219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.784252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/host/target_disconnect.sh: line 36: 484910 Killed "${NVMF_APP[@]}" "$@" 00:27:25.672 [2024-07-25 12:13:12.784725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.784756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.785168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.785199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@48 -- # disconnect_init 10.0.0.2 00:27:25.672 [2024-07-25 12:13:12.785676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@17 -- # nvmfappstart -m 0xF0 00:27:25.672 [2024-07-25 12:13:12.785707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:27:25.672 [2024-07-25 12:13:12.786179] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.786195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 
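The "Killed" message from target_disconnect.sh in the line above shows the NVMe-oF target application ("${NVMF_APP[@]}") being terminated by the test, so the host-side connect() retries to 10.0.0.2 port 4420 keep failing with errno = 111, which on Linux is ECONNREFUSED. A hypothetical stand-alone probe (not part of the SPDK test; address/port taken from the log) that reproduces the same refused-connection condition might look like this:

```bash
#!/usr/bin/env bash
# Hypothetical probe, not part of the SPDK test suite: check whether anything
# is listening on the address/port the host keeps retrying. With the target
# application killed, the TCP connect is refused, which is what the
# "connect() failed, errno = 111" (ECONNREFUSED on Linux) lines report.
if timeout 1 bash -c 'exec 3<>/dev/tcp/10.0.0.2/4420' 2>/dev/null; then
    echo "10.0.0.2:4420 accepted the connection (target is listening)"
else
    echo "10.0.0.2:4420 refused or timed out (no NVMe-oF target listening)"
fi
```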
00:27:25.672 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:25.672 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.672 [2024-07-25 12:13:12.786719] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.786750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.787278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.672 [2024-07-25 12:13:12.787292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.672 qpair failed and we were unable to recover it. 00:27:25.672 [2024-07-25 12:13:12.787670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.787715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.788174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.788206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.788701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.788731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.789204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.789220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.789725] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.789738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.790175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.790206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.790666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.790696] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 
00:27:25.673 [2024-07-25 12:13:12.791155] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.791186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.791580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.791608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.792255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.792287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.792753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.792782] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@481 -- # nvmfpid=485631 00:27:25.673 [2024-07-25 12:13:12.793262] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.793295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@482 -- # waitforlisten 485631 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF0 00:27:25.673 [2024-07-25 12:13:12.793793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.793826] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@829 -- # '[' -z 485631 ']' 00:27:25.673 [2024-07-25 12:13:12.794157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.794225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:25.673 qpair failed and we were unable to recover it. 
00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:25.673 [2024-07-25 12:13:12.794681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.794719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:25.673 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:25.673 [2024-07-25 12:13:12.795287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.795313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 12:13:12 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:25.673 [2024-07-25 12:13:12.795718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.795733] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.795996] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.796020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.796474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.796489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.796937] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.796951] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.797401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.797415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.797854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.797868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 
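(Editor's note, not part of the captured console output.) The xtrace lines around here show the test relaunching the NVMe-oF target (`nvmf_tgt -i 0 -e 0xFFFF -m 0xF0` inside the `cvl_0_0_ns_spdk` namespace) and then waiting for its RPC socket at /var/tmp/spdk.sock. As a rough, stand-alone sketch of that sequence: the namespace name, binary path, flags and socket path are copied from the log, while the polling loop is only an approximation of the suite's `waitforlisten` helper, not its real implementation (which talks to the RPC server):

```bash
#!/usr/bin/env bash
# Rough equivalent of the traced restart above; not the actual test helper.

NETNS=cvl_0_0_ns_spdk
NVMF_TGT=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt
RPC_SOCK=/var/tmp/spdk.sock

# Relaunch the target in the test's network namespace (flags as in the log).
sudo ip netns exec "$NETNS" "$NVMF_TGT" -i 0 -e 0xFFFF -m 0xF0 &
tgt_pid=$!

# Poll until the UNIX-domain RPC socket appears (give up after ~100 s).
for _ in $(seq 1 100); do
    [ -S "$RPC_SOCK" ] && break
    sleep 1
done
echo "nvmf_tgt pid=$tgt_pid, rpc socket: $(ls -l "$RPC_SOCK" 2>/dev/null || echo 'not ready')"
```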
00:27:25.673 [2024-07-25 12:13:12.798309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.798323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.798754] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.798770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.799194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.799208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.799566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.799579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.800003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.800017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.673 qpair failed and we were unable to recover it. 00:27:25.673 [2024-07-25 12:13:12.800406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.673 [2024-07-25 12:13:12.800421] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.800861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.800877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.801344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.801358] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.801731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.801744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.802247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.802261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 
00:27:25.674 [2024-07-25 12:13:12.802788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.802802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.803309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.803323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.803740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.803754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.804198] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.804212] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.804524] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.804538] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.805023] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.805037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.805410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.805423] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.805592] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.805608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.806027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.806041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.806470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.806485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 
00:27:25.674 [2024-07-25 12:13:12.806966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.806980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.807407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.807420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.807912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.807926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.808289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.808305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.808542] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.808555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.808983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.808997] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.809785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.809800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.810306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.810320] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.810820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.810834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.811278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.811293] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 
00:27:25.674 [2024-07-25 12:13:12.811776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.811791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.812174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.812188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.812405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.812419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.813399] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.813422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.813857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.813871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.814236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.814251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.814676] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.814690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.815173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.815187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.815544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.815558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.815970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.815983] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 
00:27:25.674 [2024-07-25 12:13:12.816380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.674 [2024-07-25 12:13:12.816404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.674 qpair failed and we were unable to recover it. 00:27:25.674 [2024-07-25 12:13:12.816899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.816921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.817100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.817115] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.817591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.817615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.817922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.817944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.818209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.818234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.818619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.818642] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.819070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.819100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.819482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.819497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.819930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.819953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 
00:27:25.675 [2024-07-25 12:13:12.820396] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.820420] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.820844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.820859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.821236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.821260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.821732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.821746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.822246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.822270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.822677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.822691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.823377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.823393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.823874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.823891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.824286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.824309] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.824683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.824697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 
00:27:25.675 [2024-07-25 12:13:12.825078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.825092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.825534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.825561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.825926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.825940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.826369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.826384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.826797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.826811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.827289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.827303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.827522] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.827537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.827894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.827907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.828334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.828349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.828763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.828777] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 
00:27:25.675 [2024-07-25 12:13:12.829194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.829210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.829566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.829580] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.830008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.830021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.830529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.830543] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.830910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.830923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.831362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.831377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.831837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.831851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.832078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.832092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.832594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.675 [2024-07-25 12:13:12.832607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.675 qpair failed and we were unable to recover it. 00:27:25.675 [2024-07-25 12:13:12.833087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.833101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 
00:27:25.676 [2024-07-25 12:13:12.833445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.833459] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.833883] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.833896] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.834254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.834268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.834682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.834695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.835121] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.835136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.835615] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.835632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.836014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.836027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.836390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.836404] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.836834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.836848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.837011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.837024] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 
00:27:25.676 [2024-07-25 12:13:12.837482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.837496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.837914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.837927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.838245] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.838259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.838625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.838638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.839118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.839133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.839561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.839575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.840332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.840347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.840852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.840865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.841307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.841321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.841801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.841815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.841891] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:27:25.676 [2024-07-25 12:13:12.841928] [ DPDK EAL parameters: nvmf -c 0xF0 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:27:25.676 [2024-07-25 12:13:12.842230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.842245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.842722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.842735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.843438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.843454] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.843957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.843984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.844421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.844436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.844917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.844931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.845373] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.845387] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.845868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.845881] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.846383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.846397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 
00:27:25.676 [2024-07-25 12:13:12.846912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.846925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.847453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.847467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.847852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.847865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.848366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.848379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.848810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.848824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.849330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.849344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.849794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.849807] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.676 qpair failed and we were unable to recover it. 00:27:25.676 [2024-07-25 12:13:12.850240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.676 [2024-07-25 12:13:12.850254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.850639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.850652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.851131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.851145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 
00:27:25.677 [2024-07-25 12:13:12.851571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.851584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.852047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.852062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.852572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.852585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.853013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.853026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.853452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.853465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.853899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.853913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.854393] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.854407] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.854832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.854845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.855269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.855283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.855768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.855781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 
00:27:25.677 [2024-07-25 12:13:12.856203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.856217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.856696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.856710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.857148] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.857162] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.857576] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.857589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.858095] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.858118] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.858563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.858577] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.858992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.859005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.859446] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.859461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.859811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.859825] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.860264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.860287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 
00:27:25.677 [2024-07-25 12:13:12.860713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.860727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.861204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.861218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.861635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.861648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.862111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.862124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.862630] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.862643] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.863073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.863086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.863445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.863458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.863831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.863844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.864223] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.864237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 00:27:25.677 [2024-07-25 12:13:12.864740] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.677 [2024-07-25 12:13:12.864754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.677 qpair failed and we were unable to recover it. 
00:27:25.677 [2024-07-25 12:13:12.865193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.865207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.677 qpair failed and we were unable to recover it.
00:27:25.677 [2024-07-25 12:13:12.865669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.865682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.677 qpair failed and we were unable to recover it.
00:27:25.677 [2024-07-25 12:13:12.866164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.866178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.677 qpair failed and we were unable to recover it.
00:27:25.677 [2024-07-25 12:13:12.866607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.866620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.677 qpair failed and we were unable to recover it.
00:27:25.677 [2024-07-25 12:13:12.867128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.867143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.677 qpair failed and we were unable to recover it.
00:27:25.677 [2024-07-25 12:13:12.867467] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.677 [2024-07-25 12:13:12.867481] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.678 qpair failed and we were unable to recover it.
00:27:25.678 [2024-07-25 12:13:12.867922] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.678 [2024-07-25 12:13:12.867936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.678 qpair failed and we were unable to recover it.
00:27:25.678 [2024-07-25 12:13:12.868389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.678 [2024-07-25 12:13:12.868403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.678 qpair failed and we were unable to recover it.
00:27:25.678 EAL: No free 2048 kB hugepages reported on node 1
00:27:25.678 [2024-07-25 12:13:12.868888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.678 [2024-07-25 12:13:12.868901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.678 qpair failed and we were unable to recover it.
00:27:25.678 [2024-07-25 12:13:12.869417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.678 [2024-07-25 12:13:12.869432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.678 qpair failed and we were unable to recover it.
00:27:25.678 [2024-07-25 12:13:12.869854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.869868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.870347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.870360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.870775] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.870788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.871008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.871022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.871512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.871526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.871974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.871988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.872493] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.872510] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.873021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.873034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.873417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.873430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.873933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.873946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 
00:27:25.678 [2024-07-25 12:13:12.874375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.874389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.874816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.874830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.875266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.875280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.875708] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.875721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.876150] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.876164] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.876642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.876656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.877159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.877172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.877545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.877558] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.878061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.878076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.878561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.878575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 
00:27:25.678 [2024-07-25 12:13:12.878971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.878984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.879463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.879477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.879914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.879928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.880354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.880369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.880847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.880861] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.881354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.881368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.881874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.881888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.678 [2024-07-25 12:13:12.882326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.678 [2024-07-25 12:13:12.882341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.678 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.882718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.882731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.883168] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.883183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 
00:27:25.679 [2024-07-25 12:13:12.883502] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.883516] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.883894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.883908] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.884388] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.884403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.884859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.884876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.885249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.885262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.885711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.885725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.886203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.886217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.886656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.886669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.887081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.887095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.887459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.887472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 
00:27:25.679 [2024-07-25 12:13:12.887856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.887869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.888371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.888386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.888871] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.888886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.889321] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.889335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.889584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.889598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.890025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.890039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.890512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.890526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.890958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.890971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.891452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.891465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.891912] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.891926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 
00:27:25.679 [2024-07-25 12:13:12.892435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.892449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.892880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.892893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.893206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.893221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.893753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.893767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.894142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.894156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.894580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.894594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.895094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.895110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.895547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.895560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.895974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.895987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.896413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.896427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 
00:27:25.679 [2024-07-25 12:13:12.896880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.896893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.897067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.897081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.897331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.897345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.897825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.897839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.898206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.679 [2024-07-25 12:13:12.898219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.679 qpair failed and we were unable to recover it. 00:27:25.679 [2024-07-25 12:13:12.898649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.680 [2024-07-25 12:13:12.898663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.680 qpair failed and we were unable to recover it. 00:27:25.680 [2024-07-25 12:13:12.899088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.680 [2024-07-25 12:13:12.899102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.680 qpair failed and we were unable to recover it. 00:27:25.680 [2024-07-25 12:13:12.899587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.680 [2024-07-25 12:13:12.899601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.680 qpair failed and we were unable to recover it. 00:27:25.680 [2024-07-25 12:13:12.900069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.680 [2024-07-25 12:13:12.900083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.680 qpair failed and we were unable to recover it. 00:27:25.680 [2024-07-25 12:13:12.900505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.680 [2024-07-25 12:13:12.900518] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.680 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-25 12:13:12.900878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.900892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.901276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.901291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.901826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.901840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.902164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.902178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.902564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.902579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.902957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.902970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.903413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.903428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.903907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.903921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.904342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.904356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.904844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.904859] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-25 12:13:12.905326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.905339] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.905716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.905730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.906175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.906190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.906567] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.906581] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.906945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.906959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.907473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.907488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.908007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.908020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.908257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.908271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.908646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.908660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 00:27:25.966 [2024-07-25 12:13:12.909370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.966 [2024-07-25 12:13:12.909386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.966 qpair failed and we were unable to recover it. 
00:27:25.966 [2024-07-25 12:13:12.909865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.966 [2024-07-25 12:13:12.909879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.966 qpair failed and we were unable to recover it.
00:27:25.966 [2024-07-25 12:13:12.910258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.910272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.910690] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.910703] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.911156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.911170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.911205] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4
00:27:25.967 [2024-07-25 12:13:12.911606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.911620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.912050] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.912065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.912547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.912561] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.912916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.912930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.913430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.913444] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.913925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.967 [2024-07-25 12:13:12.913939] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.967 qpair failed and we were unable to recover it.
00:27:25.967 [2024-07-25 12:13:12.914309] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.914323] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.914561] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.914575] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.915083] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.915097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.915603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.915617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.916068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.916083] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.916448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.916462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.916977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.916992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.917366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.917381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.917865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.917879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.918307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.918321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 
00:27:25.967 [2024-07-25 12:13:12.918753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.918768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.918924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.918938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.919354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.919370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.919737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.919753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.920156] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.920174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.920548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.920565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.921068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.921084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.967 qpair failed and we were unable to recover it. 00:27:25.967 [2024-07-25 12:13:12.921512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.967 [2024-07-25 12:13:12.921526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.921778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.921792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.922221] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.922235] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 
00:27:25.968 [2024-07-25 12:13:12.922562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.922576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.922945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.922959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.923438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.923453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.923935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.923949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.924307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.924321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.924640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.924653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.925087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.925101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.925537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.925551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.925978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.925992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.926472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.926486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 
00:27:25.968 [2024-07-25 12:13:12.926962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.926976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.927338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.927352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.927776] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.927789] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.928291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.928305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.928730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.928744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.929108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.929123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.929546] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.929560] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.929975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.929988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.930471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.930486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.930756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.930770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 
00:27:25.968 [2024-07-25 12:13:12.931093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.931107] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.931478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.931492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.931995] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.932008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.932445] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.932460] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.968 qpair failed and we were unable to recover it. 00:27:25.968 [2024-07-25 12:13:12.932940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.968 [2024-07-25 12:13:12.932954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.933453] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.933467] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.933889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.933903] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.934331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.934346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.934647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.934661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.935087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.935102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 
00:27:25.969 [2024-07-25 12:13:12.935515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.935529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.935962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.935976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.936335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.936349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.936863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.936877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.937379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.937393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.937827] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.937842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.938225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.938240] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.938717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.938731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.939233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.939248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.939700] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.939714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 
00:27:25.969 [2024-07-25 12:13:12.940174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.940188] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.940646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.940659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.941088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.941103] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.941528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.941542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.942047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.942062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.942569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.942584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.943089] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.943104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.943515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.943529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.944008] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.944022] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.944508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.944523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 
00:27:25.969 [2024-07-25 12:13:12.944951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.969 [2024-07-25 12:13:12.944965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.969 qpair failed and we were unable to recover it. 00:27:25.969 [2024-07-25 12:13:12.945323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.945338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.945816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.945830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.946243] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.946258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.946611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.946625] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.947051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.947066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.947568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.947584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.948037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.948063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.948566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.948586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.949014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.949035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 
00:27:25.970 [2024-07-25 12:13:12.949555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.949576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.950061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.950076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.950528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.950552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.951081] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.951098] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.951475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.951491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.951928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.951942] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.952449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.952465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.952946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.952962] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.953180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.953195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.953622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.953638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 
00:27:25.970 [2024-07-25 12:13:12.954130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.954145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.954572] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.954588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.955093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.955109] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.955639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.955654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.955874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.955888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.956389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.956405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.956889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.956904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.957066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.970 [2024-07-25 12:13:12.957081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.970 qpair failed and we were unable to recover it. 00:27:25.970 [2024-07-25 12:13:12.957444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.957457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.957959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.957973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 
00:27:25.971 [2024-07-25 12:13:12.958347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.958361] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.958789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.958803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.959238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.959252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.959709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.959723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.960180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.960193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.960674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.960687] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.961097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.961111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.961593] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.961607] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.962085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.962099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.962601] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.962617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 
00:27:25.971 [2024-07-25 12:13:12.963117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.963131] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.963565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.963578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.963957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.963970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.964390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.964403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.964880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.964893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.965328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.965343] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.965843] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.965856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.966336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.966351] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.966501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.966514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.966972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.966985] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 
00:27:25.971 [2024-07-25 12:13:12.967340] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.967354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.967791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.967805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.968230] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.968244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.968669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.971 [2024-07-25 12:13:12.968683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.971 qpair failed and we were unable to recover it. 00:27:25.971 [2024-07-25 12:13:12.969107] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.969121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.969604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.969618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.970066] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.970080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.970492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.970505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.970985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.970998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.971479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.971492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 
00:27:25.972 [2024-07-25 12:13:12.971905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.971918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.972132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.972146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.972529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.972542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.973064] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.973079] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.973556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.973569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.974056] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.974070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.974513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.974527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.974950] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.974964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.975410] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.975424] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.975877] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.975890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 
00:27:25.972 [2024-07-25 12:13:12.976392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.976406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.976715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.976728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.977162] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.972 [2024-07-25 12:13:12.977175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.972 qpair failed and we were unable to recover it. 00:27:25.972 [2024-07-25 12:13:12.977640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.977654] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.978149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.978163] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.978608] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.978622] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.979053] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.979067] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.979506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.979519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.979997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.980010] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.980425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.980439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 
00:27:25.973 [2024-07-25 12:13:12.980921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.980935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.981356] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.981370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.981814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.981827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.982252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.982266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.982694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.982709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.983188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.983202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.983710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.983723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.984152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.984166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.984590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.984604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 00:27:25.973 [2024-07-25 12:13:12.985019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.973 [2024-07-25 12:13:12.985034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.973 qpair failed and we were unable to recover it. 
00:27:25.973 [2024-07-25 12:13:12.985540] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.985556] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.985930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.985945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.986422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.986436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.986481] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified.
00:27:25.973 [2024-07-25 12:13:12.986513] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime.
00:27:25.973 [2024-07-25 12:13:12.986523] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only
00:27:25.973 [2024-07-25 12:13:12.986529] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running.
00:27:25.973 [2024-07-25 12:13:12.986534] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug.
00:27:25.973 [2024-07-25 12:13:12.986645] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 5
00:27:25.973 [2024-07-25 12:13:12.986752] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 6
00:27:25.973 [2024-07-25 12:13:12.986857] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 4
00:27:25.973 [2024-07-25 12:13:12.986916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.986930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.986858] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 7
00:27:25.973 [2024-07-25 12:13:12.987411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.987426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.987858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.987871] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.973 qpair failed and we were unable to recover it.
00:27:25.973 [2024-07-25 12:13:12.988297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.973 [2024-07-25 12:13:12.988310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.974 qpair failed and we were unable to recover it.
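The app_setup_trace notices above describe how the trace data produced by this nvmf target could be inspected while the connection-failure test runs. A minimal sketch on the test host, assuming the instance id 0 and the /dev/shm/nvmf_trace.0 file named in the notices (not part of the captured run):
  spdk_trace -s nvmf -i 0            # capture a snapshot of trace events from the running nvmf app, as the NOTICE suggests
  cp /dev/shm/nvmf_trace.0 /tmp/     # or keep the shared-memory trace file for offline analysis/debug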
00:27:25.974 [2024-07-25 12:13:12.988814] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.988828] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.989269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.989283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.989786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.989800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.990180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.990194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.990537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.990551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.991055] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.991068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.991492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.991506] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.991939] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.991953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.992380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.992395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.992884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.992898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 
00:27:25.974 [2024-07-25 12:13:12.993235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.993249] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.993753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.993768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.994231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.994246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.994748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.994762] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.995136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.995151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.995644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.995658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.996080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.996095] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.996474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.996488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.996897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.996911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.997370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.997385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 
00:27:25.974 [2024-07-25 12:13:12.997806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.997821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.998244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.998259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.998760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.998775] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.999254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.999269] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:12.999644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:12.999659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:13.000111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.974 [2024-07-25 12:13:13.000126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.974 qpair failed and we were unable to recover it. 00:27:25.974 [2024-07-25 12:13:13.000633] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.000647] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.001073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.001089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.001618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.001633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.002027] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.002049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 
00:27:25.975 [2024-07-25 12:13:13.002552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.002568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.002990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.003005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.003428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.003445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.003873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.003888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.004385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.004426] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.004917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.004929] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.005407] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.005418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.005836] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.005846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.006212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.006222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.006644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.006653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 
00:27:25.975 [2024-07-25 12:13:13.007090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.007100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.007530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.007540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.008033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.008057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.008480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.008492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.008933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.008946] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.009390] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.009401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.009897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.009910] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.010301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.010318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.010819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.010832] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.011274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.011286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 
00:27:25.975 [2024-07-25 12:13:13.011732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.011743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.012111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.975 [2024-07-25 12:13:13.012124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.975 qpair failed and we were unable to recover it. 00:27:25.975 [2024-07-25 12:13:13.012544] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.012555] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.012909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.012920] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.013358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.013369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.013733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.013744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.014236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.014246] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.014742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.014752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.015167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.015178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.015663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.015672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 
00:27:25.976 [2024-07-25 12:13:13.016125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.016136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.016557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.016568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.016974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.016984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.017475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.017487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.017911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.017921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.018079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.018089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.018504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.018515] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.019010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.019021] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.019471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.019483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.019955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.019966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 
00:27:25.976 [2024-07-25 12:13:13.020459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.020473] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.020994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.021006] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.021428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.021441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.021914] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.021928] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.022189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.022218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.022673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.022690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.023157] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.023174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.023658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.023672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.976 [2024-07-25 12:13:13.024119] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.976 [2024-07-25 12:13:13.024135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.976 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.024614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.024627] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 
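Editor's note: errno = 111 is ECONNREFUSED on Linux, so every posix_sock_create failure above means the connect() to the target at 10.0.0.2, port 4420 (the conventional NVMe/TCP port) was actively refused, and nvme_tcp_qpair_connect_sock therefore cannot bring the queue pair up, which is why each attempt ends with "qpair failed and we were unable to recover it." Note also that the tqpair identifier switches from 0x7f914c000b90 to 0x23e8f30 in this stretch, i.e. the retries continue against a different qpair object. The following is a minimal illustration only, not SPDK source: it shows how a plain TCP connect() to an address with no listener surfaces errno 111. The address and port come from the log; the helper name and everything else are hypothetical.

/* Minimal sketch (not SPDK code): a blocking TCP connect that reports
 * ECONNREFUSED (errno 111 on Linux) when nothing is listening on the
 * target, e.g. 10.0.0.2:4420 from the log above. */
#include <arpa/inet.h>
#include <errno.h>
#include <netinet/in.h>
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/socket.h>
#include <unistd.h>

static int try_connect(const char *addr, uint16_t port)
{
    int fd = socket(AF_INET, SOCK_STREAM, 0);
    if (fd < 0) {
        return -errno;
    }

    struct sockaddr_in sa = { .sin_family = AF_INET, .sin_port = htons(port) };
    inet_pton(AF_INET, addr, &sa.sin_addr);

    if (connect(fd, (struct sockaddr *)&sa, sizeof(sa)) != 0) {
        int err = errno;                 /* ECONNREFUSED is 111 on Linux */
        fprintf(stderr, "connect() failed, errno = %d (%s)\n",
                err, strerror(err));
        close(fd);
        return -err;
    }

    close(fd);
    return 0;
}

int main(void)
{
    /* Values taken from the log; adjust for the target under test. */
    return try_connect("10.0.0.2", 4420) == 0 ? 0 : 1;
}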
00:27:25.977 [2024-07-25 12:13:13.025109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.025124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.025635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.025649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.026033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.026051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.026554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.026569] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.027011] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.027026] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.027568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.027585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.027808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.027822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.028322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.028338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.028774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.028788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.029293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.029308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 
00:27:25.977 [2024-07-25 12:13:13.029742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.029757] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.030199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.030215] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.030722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.030741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.031250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.031270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.031718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.031737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.032167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.032183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.032692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.032709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.033192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.033207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.033658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.033671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.034151] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.034165] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 
00:27:25.977 [2024-07-25 12:13:13.034591] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.034604] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.035033] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.035054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.035553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.035566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.035978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.035992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.036439] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.977 [2024-07-25 12:13:13.036453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.977 qpair failed and we were unable to recover it. 00:27:25.977 [2024-07-25 12:13:13.036954] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.036968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.037447] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.037462] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.037958] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.037972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.038452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.038466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.038972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.038986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 
00:27:25.978 [2024-07-25 12:13:13.039430] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.039445] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.039863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.039878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.040251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.040266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.040699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.040714] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.041145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.041160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.041666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.041679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.042116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.042129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.042569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.042582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.043088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.043102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.043532] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.043545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 
00:27:25.978 [2024-07-25 12:13:13.044054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.044069] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.044513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.044541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.045069] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.045093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.045588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.045612] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.046093] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.046123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.046614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.046636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.047152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.047175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.047610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.047632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.048135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.048187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.048718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.048741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 
00:27:25.978 [2024-07-25 12:13:13.049180] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.049194] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.978 [2024-07-25 12:13:13.049652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.978 [2024-07-25 12:13:13.049666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.978 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.050099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.050114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.050636] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.050650] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.051080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.051094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.051452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.051465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.051949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.051972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.052411] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.052434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.052971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.052994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.053365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.053389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 
00:27:25.979 [2024-07-25 12:13:13.053832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.053846] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.054327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.054352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.054881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.054904] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.055357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.055380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.055794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.055818] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.056264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.056287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.056797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.056820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.057250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.057266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.057577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.057591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.058036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.058056] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 
00:27:25.979 [2024-07-25 12:13:13.058550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.058563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.058986] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.059000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.979 [2024-07-25 12:13:13.059426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.979 [2024-07-25 12:13:13.059440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.979 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.059967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.059980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.060425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.060438] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.060811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.060829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.061276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.061291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.061721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.061735] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.062212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.062226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.062639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.062653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 
00:27:25.980 [2024-07-25 12:13:13.063136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.063150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.063581] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.063595] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.064098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.064112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.064539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.064552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.064981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.064995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.065441] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.065455] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.065898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.065911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.066376] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.066390] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.066902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.066916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.067345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.067359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 
00:27:25.980 [2024-07-25 12:13:13.067785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.067798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.068181] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.068195] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.068714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.068728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.069075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.069089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.069835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.069850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.070354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.070368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.070879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.070892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.071364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.071388] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.071807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.980 [2024-07-25 12:13:13.071821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.980 qpair failed and we were unable to recover it. 00:27:25.980 [2024-07-25 12:13:13.072244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.072258] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 
00:27:25.981 [2024-07-25 12:13:13.072685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.072708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.073143] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.073166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.073367] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.073381] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.073907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.073930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.074389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.074413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.074924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.074947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.075457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.075490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.075955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.075971] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.076389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.076413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.076857] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.076880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 
00:27:25.981 [2024-07-25 12:13:13.077317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.077340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.077854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.077876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.078320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.078353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.078729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.078744] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.079127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.079151] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.079589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.079611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.079875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.079893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.080344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.080376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.080760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.080785] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.081330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.081354] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 
00:27:25.981 [2024-07-25 12:13:13.081822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.081844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.082329] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.082353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.082866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.082888] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.083347] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.083370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.083792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.083806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.981 [2024-07-25 12:13:13.084289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.981 [2024-07-25 12:13:13.084313] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.981 qpair failed and we were unable to recover it. 00:27:25.982 [2024-07-25 12:13:13.084844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.982 [2024-07-25 12:13:13.084858] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.982 qpair failed and we were unable to recover it. 00:27:25.982 [2024-07-25 12:13:13.085301] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.982 [2024-07-25 12:13:13.085315] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.982 qpair failed and we were unable to recover it. 00:27:25.982 [2024-07-25 12:13:13.085738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.982 [2024-07-25 12:13:13.085753] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.982 qpair failed and we were unable to recover it. 00:27:25.982 [2024-07-25 12:13:13.086231] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.982 [2024-07-25 12:13:13.086245] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.982 qpair failed and we were unable to recover it. 
00:27:25.982 [2024-07-25 12:13:13.086669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:25.982 [2024-07-25 12:13:13.086682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:25.982 qpair failed and we were unable to recover it.
00:27:25.982-00:27:25.990 [... the same three-line error sequence -- connect() failed, errno = 111; sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420; "qpair failed and we were unable to recover it." -- repeats continuously from 2024-07-25 12:13:13.087163 through 12:13:13.179989 ...]
00:27:25.990 [2024-07-25 12:13:13.180417] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.180431] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.180648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.180662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.181142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.181157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.181652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.181665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.182025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.182039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.182470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.182484] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.182863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.182877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.183307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.183322] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.183817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.183831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.184269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.184284] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 
00:27:25.990 [2024-07-25 12:13:13.184696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.184710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.185154] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.185168] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.185562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.185576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.186015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.186028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.186397] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.186411] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.186786] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.186800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.187239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.187253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.187699] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.187713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.188097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.188111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.188535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.188549] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 
00:27:25.990 [2024-07-25 12:13:13.189061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.189078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.189509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.189523] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.190013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.190027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.190450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.190464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.190913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.190927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.191296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.191310] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.191671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.191685] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.192136] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.192150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.192656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.192670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.193037] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.193057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 
00:27:25.990 [2024-07-25 12:13:13.193483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.193496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.193856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.193870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.194306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.194321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.194692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.194707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.195134] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.195150] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.195627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.195641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.990 qpair failed and we were unable to recover it. 00:27:25.990 [2024-07-25 12:13:13.196145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.990 [2024-07-25 12:13:13.196159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.196573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.196587] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.197014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.197028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.197515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.197529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 
00:27:25.991 [2024-07-25 12:13:13.197897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.197911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.198345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.198359] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.198770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.198784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.199195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.199209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.199657] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.199670] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.200068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.200082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.200269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.200282] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.200764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.200780] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.201219] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.201233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.201587] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.201600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 
00:27:25.991 [2024-07-25 12:13:13.202080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.202093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.202528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.202542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.202970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.202984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.203395] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.203409] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.203770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.203784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:25.991 [2024-07-25 12:13:13.204225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:25.991 [2024-07-25 12:13:13.204239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:25.991 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.204667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.204682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.205142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.205159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.205590] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.205606] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.205973] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.205987] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 
00:27:26.261 [2024-07-25 12:13:13.206422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.206437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.206813] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.206827] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.207248] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.207262] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.207687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.207701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.208125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.208139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.208517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.208531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.208951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.208965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.209645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.209660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.210167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.210183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 00:27:26.261 [2024-07-25 12:13:13.210625] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.210638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.261 qpair failed and we were unable to recover it. 
00:27:26.261 [2024-07-25 12:13:13.211071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.261 [2024-07-25 12:13:13.211085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.211563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.211576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.212000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.212014] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.212366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.212380] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.212742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.212756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.213254] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.213268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.213621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.213635] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.214058] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.214072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.214505] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.214519] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.214935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.214949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 
00:27:26.262 [2024-07-25 12:13:13.215413] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.215428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.215906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.215921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.216349] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.216363] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.216855] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.216869] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.217244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.217259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.217631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.217645] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.217795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.217808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.218174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.218187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.218670] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.218684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.219127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.219141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 
00:27:26.262 [2024-07-25 12:13:13.219521] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.219535] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.219967] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.219981] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.220465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.220479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.220831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.220844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.221514] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.221528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.222005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.222018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.222385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.222399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.222840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.222854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.223372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.223386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.223806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.223820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 
00:27:26.262 [2024-07-25 12:13:13.224272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.224288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.224717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.224730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.225184] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.225198] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.225696] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.225709] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.226192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.226207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.226586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.226600] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.227022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.227035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.262 [2024-07-25 12:13:13.227550] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.262 [2024-07-25 12:13:13.227564] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.262 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.227979] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.227993] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.228364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.228379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 
00:27:26.263 [2024-07-25 12:13:13.228801] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.228815] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.229271] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.229285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.229765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.229778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.230152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.230166] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.230659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.230672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.231100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.231116] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.231531] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.231545] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.232057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.232072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.232499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.232513] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.232889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.232902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 
00:27:26.263 [2024-07-25 12:13:13.233385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.233399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.233558] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.233571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.234001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.234015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.234433] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.234447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.234863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.234876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.235241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.235255] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.235755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.235769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.236197] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.236211] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.236641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.236655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.237079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.237093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 
00:27:26.263 [2024-07-25 12:13:13.237474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.237488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.237978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.237992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.238368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.238382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.238807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.238821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.239193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.239207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.239565] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.239579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.240061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.240076] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.240556] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.240570] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.241029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.241060] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.241477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.241490] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 
00:27:26.263 [2024-07-25 12:13:13.241910] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.241923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.242335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.242349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.242785] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.242801] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.243186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.243200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.243571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.243585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.263 qpair failed and we were unable to recover it. 00:27:26.263 [2024-07-25 12:13:13.244085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.263 [2024-07-25 12:13:13.244100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.244511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.244525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.244949] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.244964] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.245384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.245399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.245823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.245837] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 
00:27:26.264 [2024-07-25 12:13:13.246263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.246277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.246634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.246648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.247077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.247091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.247520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.247533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.247911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.247925] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.248293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.248307] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.248815] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.248829] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.249260] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.249273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.249738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.249751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.250244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.250257] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 
00:27:26.264 [2024-07-25 12:13:13.250489] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.250503] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.250851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.250865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.251292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.251306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.251678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.251691] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.252297] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.252312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.252677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.252690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.253173] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.253187] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.253555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.253568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.254036] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.254054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.254414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.254430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 
00:27:26.264 [2024-07-25 12:13:13.254803] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.254817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.255183] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.255197] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.255554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.255567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.256001] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.256015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.256457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.256471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.256923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.256936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.257379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.257393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.257825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.257839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.258255] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.258268] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.258683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.258697] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 
00:27:26.264 [2024-07-25 12:13:13.259128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.259142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.259623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.259637] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.260062] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.260075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.264 [2024-07-25 12:13:13.260452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.264 [2024-07-25 12:13:13.260466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.264 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.260898] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.260911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.261346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.261360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.261806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.261820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.262250] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.262264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.262627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.262641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.262906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.262919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 
00:27:26.265 [2024-07-25 12:13:13.263294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.263308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.263787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.263800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.264188] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.264202] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.264616] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.264629] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.265063] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.265078] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.265500] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.265514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.265934] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.265948] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.266389] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.266403] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.266826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.266839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.267277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.267290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 
00:27:26.265 [2024-07-25 12:13:13.267713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.267726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.268152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.268167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.268603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.268617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.269236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.269250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.269681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.269695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.270114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.270128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.270588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.270602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.271024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.271038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.271485] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.271499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.271911] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.271924] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 
00:27:26.265 [2024-07-25 12:13:13.272406] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.272422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.272858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.272872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.273381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.273395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.273873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.273886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.274333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.274347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.274729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.274743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.275115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.275129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.275606] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.265 [2024-07-25 12:13:13.275620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.265 qpair failed and we were unable to recover it. 00:27:26.265 [2024-07-25 12:13:13.276127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.276141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.276578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.276591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-25 12:13:13.277004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.277018] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.277387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.277401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.277770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.277784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.278202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.278216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.278642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.278656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.279079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.279092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.279452] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.279466] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.279835] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.279849] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.280277] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.280291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.280735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.280748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-25 12:13:13.281200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.281214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.281650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.281664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.282087] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.282101] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.282470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.282483] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.282902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.282916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.283293] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.283306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.283722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.283736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.284174] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.284190] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.284553] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.284567] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.285015] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.285029] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 
00:27:26.266 [2024-07-25 12:13:13.285458] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.285472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.285893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.285906] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.286289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.286303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.286727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.286741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.287131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.287145] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.287506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.287520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.287866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.287880] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.288258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.288272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.266 [2024-07-25 12:13:13.288716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.266 [2024-07-25 12:13:13.288731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.266 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.289161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.289175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-25 12:13:13.289557] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.289571] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.289999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.290012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.290392] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.290406] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.291029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.291047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.291477] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.291491] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.291970] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.291984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.292361] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.292376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.292729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.292743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.293116] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.293130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.293504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.293517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-25 12:13:13.293885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.293898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.294351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.294365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.294811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.294824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.295204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.295217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.295586] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.295602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.296030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.296049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.296566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.296579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.296957] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.296970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.297398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.297413] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.297840] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.297854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-25 12:13:13.298199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.298213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.298585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.298598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.298963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.298976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.299461] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.299475] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.299828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.299842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.300225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.300239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.300678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.300692] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.301126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.301140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.301517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.301530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.301963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.301976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 
00:27:26.267 [2024-07-25 12:13:13.302336] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.302349] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.302706] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.302719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.303139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.303153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.267 qpair failed and we were unable to recover it. 00:27:26.267 [2024-07-25 12:13:13.303498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.267 [2024-07-25 12:13:13.303512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.303873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.303887] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.304319] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.304333] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.304712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.304726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.305086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.305100] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.305473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.305487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.305847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.305860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 
00:27:26.268 [2024-07-25 12:13:13.306236] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.306250] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.306669] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.306683] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.307051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.307065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.307421] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.307434] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.307797] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.307811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.308175] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.308189] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.308617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.308631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.309051] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.309065] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.309429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.309443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.309807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.309821] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 
00:27:26.268 [2024-07-25 12:13:13.310193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.310207] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.310568] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.310582] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.310991] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.311004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.311487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.311501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.311860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.311874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.312247] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.312260] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.312621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.312634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.313088] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.313102] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.313533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.313548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.313923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.313937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 
00:27:26.268 [2024-07-25 12:13:13.314315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.314328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.314687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.314700] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.315068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.315082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.315440] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.315453] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.315799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.315812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.316178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.316193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.316635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.316649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.317024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.317037] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.317405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.317419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.268 [2024-07-25 12:13:13.317716] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.317729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 
00:27:26.268 [2024-07-25 12:13:13.318098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.268 [2024-07-25 12:13:13.318112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.268 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.318481] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.318494] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.318844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.318857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.319269] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.319283] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.319714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.319727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.320096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.320110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.320474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.320488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.320863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.320877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.321241] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.321254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.321686] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.321699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 
00:27:26.269 [2024-07-25 12:13:13.322073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.322086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.322313] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.322327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.322750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.322767] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.323206] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.323220] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.323598] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.323611] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.323978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.323992] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.324348] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.324362] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.324790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.324803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.325161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.325174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.325535] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.325548] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 
00:27:26.269 [2024-07-25 12:13:13.325964] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.325978] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.326401] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.326415] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.326842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.326856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.327220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.327234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.327595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.327609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.327981] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.327994] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.328359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.328374] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.328819] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.328833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.329258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.329272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.329645] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.329659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 
00:27:26.269 [2024-07-25 12:13:13.330098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.330112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.330474] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.330488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.330841] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.330854] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.331220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.331234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.331652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.331666] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.332030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.332052] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.332479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.269 [2024-07-25 12:13:13.332493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.269 qpair failed and we were unable to recover it. 00:27:26.269 [2024-07-25 12:13:13.332866] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.332879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.333300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.333314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.333660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.333680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-25 12:13:13.334111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.334126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.334499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.334512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.334872] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.334885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.335274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.335289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.335653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.335667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.336032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.336049] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.336466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.336479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.336982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.336995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.337426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.337441] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.337878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.337892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-25 12:13:13.338267] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.338281] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.338647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.338660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.339071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.339086] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.339463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.339477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.339831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.339844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.340273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.340286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.340765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.340778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.341186] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.341200] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.341555] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.341568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.341982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.341996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-25 12:13:13.342425] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.342440] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.342806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.342819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.343192] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.343206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.343671] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.343684] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.344127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.344141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.344491] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.344505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.344925] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.344938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.345380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.345393] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.345757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.345770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.346122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.346136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 
00:27:26.270 [2024-07-25 12:13:13.346693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.346706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.346875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.346889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.347265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.347279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.270 [2024-07-25 12:13:13.347703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.270 [2024-07-25 12:13:13.347716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.270 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.348077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.348091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.348512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.348526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.349009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.349023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.349466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.349480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.349834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.349848] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.350225] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.350239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-25 12:13:13.350609] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.350623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.351054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.351068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.351480] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.351493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.351905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.351919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.352289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.352303] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.352677] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.352690] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.353057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.353071] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.353330] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.353344] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.353913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.353927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.354368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.354382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-25 12:13:13.354930] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.354943] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.355428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.355442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.355821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.355835] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.356196] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.356210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.356626] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.356639] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.357118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.357132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.357498] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.357511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.357924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.357938] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.358370] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.358383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.358760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.358773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 
00:27:26.271 [2024-07-25 12:13:13.359220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.359234] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.359713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.359727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.360335] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.271 [2024-07-25 12:13:13.360355] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.271 qpair failed and we were unable to recover it. 00:27:26.271 [2024-07-25 12:13:13.360768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.360781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.361203] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.361217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.361648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.361662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.362086] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.362099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.362460] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.362476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.362852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.362866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.363240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.363254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-25 12:13:13.363787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.363800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.364164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.364178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.364610] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.364623] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.365131] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.365144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.365662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.365675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.366099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.366113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.366278] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.366291] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.366646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.366660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.367094] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.367108] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.367472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.367486] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-25 12:13:13.367992] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.368005] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.368362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.368377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.368751] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.368764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.369139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.369153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.369838] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.369851] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.370283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.370298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.370733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.370747] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.371130] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.371144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.371763] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.371778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.372218] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.372232] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-25 12:13:13.372709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.372722] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.373090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.373104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.373526] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.373540] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.373894] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.373907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.374350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.374367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.374737] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.374750] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.374923] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.374936] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.375368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.375382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.375746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.375759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 00:27:26.272 [2024-07-25 12:13:13.376147] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.272 [2024-07-25 12:13:13.376161] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.272 qpair failed and we were unable to recover it. 
00:27:26.272 [2024-07-25 12:13:13.376644] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.376657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.377005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.377019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.377434] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.377448] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.378068] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.378082] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.378511] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.378524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.379005] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.379019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.379443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.379458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.379837] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.379850] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.380234] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.380248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.380705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.380719] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-25 12:13:13.381096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.381110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.381617] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.381631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.382004] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.382017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.382437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.382452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.382952] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.382966] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.383383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.383397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.383761] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.383774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.384195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.384209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.384647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.384661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.385021] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.385035] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-25 12:13:13.385194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.385208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.385621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.385636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.386080] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.386094] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.386473] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.386487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.386931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.386945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.387315] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.387329] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.387715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.387729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.388164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.388177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.388619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.388633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.388853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.388867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 
00:27:26.273 [2024-07-25 12:13:13.389306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.389319] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.389743] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.389756] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.390125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.390138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.390510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.390524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.390941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.390955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.391317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.391331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.391695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.391710] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.273 [2024-07-25 12:13:13.392075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.273 [2024-07-25 12:13:13.392088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.273 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.392444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.392457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.392821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.392834] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-07-25 12:13:13.393287] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.393301] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.393668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.393682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.394104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.394117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.394600] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.394614] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.394978] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.394991] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.395415] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.395429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.395859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.395873] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.396322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.396335] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.396701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.396715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.397108] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.397122] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-07-25 12:13:13.397479] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.397493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.397920] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.397934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.398362] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.398376] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.398878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.398892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.399256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.399270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.399710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.399724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.400097] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.400111] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.400537] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.400551] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.400913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.400927] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.401300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.401314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-07-25 12:13:13.401757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.401771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.402258] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.402272] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.402709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.402723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.403229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.403244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.403666] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.403680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.404028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.404041] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.404472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.404488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.404861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.404874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.405104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.405117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.405499] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.405512] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 
00:27:26.274 [2024-07-25 12:13:13.405889] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.405902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.406327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.406340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.406770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.406783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.407444] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.407458] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.407884] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.274 [2024-07-25 12:13:13.407898] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.274 qpair failed and we were unable to recover it. 00:27:26.274 [2024-07-25 12:13:13.408276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.408292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.408660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.408674] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.408828] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.408842] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.409210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.409224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.409650] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.409663] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 
00:27:26.275 [2024-07-25 12:13:13.410103] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.410117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.410536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.410550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.410903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.410917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.411358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.411372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.411729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.411743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.412098] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.412112] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.412620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.412634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.413065] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.413080] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.413518] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.413531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.413893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.413909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 
00:27:26.275 [2024-07-25 12:13:13.414322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.414336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.414789] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.414802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.415163] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.415177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.415602] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.415615] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.416032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.416054] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.416483] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.416496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.416870] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.416884] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.417603] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.417619] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.417990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.418004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.418423] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.418437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 
00:27:26.275 [2024-07-25 12:13:13.418869] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.418882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.419372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.419386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.419755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.419768] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.420227] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.420241] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.420852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.420866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.421281] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.421295] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.421727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.421741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.422265] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.422279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.422664] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.275 [2024-07-25 12:13:13.422677] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.275 qpair failed and we were unable to recover it. 00:27:26.275 [2024-07-25 12:13:13.423057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.423070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 
00:27:26.276 [2024-07-25 12:13:13.423515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.423528] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.423701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.423715] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.424178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.424192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.424552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.424565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.424940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.424953] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.425320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.425334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.425787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.425803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.426189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.426203] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.426619] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.426633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.427138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.427152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 
00:27:26.276 [2024-07-25 12:13:13.427656] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.427671] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.428123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.428137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.428292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.428306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.428674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.428688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.429070] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.429084] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.429659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.429673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.430099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.430113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.430459] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.430472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.430924] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.430937] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.431360] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.431375] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 
00:27:26.276 [2024-07-25 12:13:13.431796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.431810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.432291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.432306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.432691] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.432704] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.433071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.433085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.433508] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.433522] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.433951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.433965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.434320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.434334] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.434755] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.434769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.435140] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.435155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.435534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.435547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 
00:27:26.276 [2024-07-25 12:13:13.435903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.435917] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.436343] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.436357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.436795] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.436809] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.437194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.437208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.437640] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.276 [2024-07-25 12:13:13.437655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.276 qpair failed and we were unable to recover it. 00:27:26.276 [2024-07-25 12:13:13.438022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.438036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.438525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.438539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.438888] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.438902] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.439358] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.439372] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.439682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.439695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 
00:27:26.277 [2024-07-25 12:13:13.440138] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.440152] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.440468] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.440482] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.440966] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.440980] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.441204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.441217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.441639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.441652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.442071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.442085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.442530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.442544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.442906] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.442919] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.443291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.443305] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.443756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.443770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 
00:27:26.277 [2024-07-25 12:13:13.444212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.444226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.444710] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.444724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.445122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.445136] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.445566] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.445579] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.445994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.446008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.446486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.446500] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.446933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.446947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.447334] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.447348] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.447721] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.447734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.448161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.448174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 
00:27:26.277 [2024-07-25 12:13:13.448551] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.448565] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.448931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.448945] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.449322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.449336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.449758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.449771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.450251] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.450264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.450685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.450699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.451076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.451090] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.451443] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.451457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.451818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.451831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.452212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.452226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 
00:27:26.277 [2024-07-25 12:13:13.452450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.452464] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.452818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.277 [2024-07-25 12:13:13.452831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.277 qpair failed and we were unable to recover it. 00:27:26.277 [2024-07-25 12:13:13.453193] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.453206] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.453635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.453649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.454133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.454149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.454523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.454536] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.454897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.454911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.455264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.455278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.455663] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.455676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.456160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.456174] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 
00:27:26.278 [2024-07-25 12:13:13.456594] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.456608] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.456982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.456995] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.457252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.457266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.457715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.457729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.458111] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.458125] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.458486] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.458499] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.458853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.458866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.459238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.459252] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.459613] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.459626] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.460109] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.460123] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 
00:27:26.278 [2024-07-25 12:13:13.460534] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.460547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.460916] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.460930] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.461369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.461383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.461742] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.461755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.462177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.462191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.462618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.462631] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.462982] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.462996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.463426] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.463439] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.463810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.463823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.464209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.464223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 
00:27:26.278 [2024-07-25 12:13:13.464648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.464661] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.465072] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.465089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.465528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.465541] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.465901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.465914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.466279] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.466292] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.466667] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.466681] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.467099] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.467113] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.467604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.467617] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.468042] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.468068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 00:27:26.278 [2024-07-25 12:13:13.468494] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.278 [2024-07-25 12:13:13.468507] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.278 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-25 12:13:13.468735] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.468749] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.469126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.469140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.469569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.469583] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.470082] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.470097] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.470528] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.470542] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.471025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.471039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.471463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.471477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.471907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.471921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.472386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.472400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.472831] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.472844] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-25 12:13:13.473208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.473222] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.473639] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.473653] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.474024] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.474038] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.474404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.474418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.474778] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.474792] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.475171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.475186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.475549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.475563] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.475997] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.476011] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.476456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.476472] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.476899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.476913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-25 12:13:13.477276] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.477290] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.477648] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.477662] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.478145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.478160] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.478597] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.478610] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.478988] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.479003] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.479375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.479389] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.479772] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.479786] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.480159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.480173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.480545] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.480559] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.480928] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.480941] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 
00:27:26.279 [2024-07-25 12:13:13.481312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.481326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.481760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.481773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.482129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.482143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.482504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.482517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.482977] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.482990] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.483418] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.483432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.483921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.279 [2024-07-25 12:13:13.483935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.279 qpair failed and we were unable to recover it. 00:27:26.279 [2024-07-25 12:13:13.484294] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.484308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.484681] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.484694] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.484987] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.485000] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-25 12:13:13.485378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.485392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.485760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.485773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.486194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.486208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.486634] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.486649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.487135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.487149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.487497] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.487511] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.487931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.487944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.488380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.488394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.488777] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.488791] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.489205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.489219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-25 12:13:13.489589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.489602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.489774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.489788] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.490222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.490237] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.490595] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.490609] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.490983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.490998] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.491428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.491442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.491809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.491823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.492239] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.492253] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.492618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.492632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.493019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.493033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 
00:27:26.280 [2024-07-25 12:13:13.493422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.493436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.493790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.493803] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.494244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.494259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.494693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.494707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.495142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.495156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.495322] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.495336] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.495703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.495717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.496078] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.496092] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.280 [2024-07-25 12:13:13.496472] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.280 [2024-07-25 12:13:13.496487] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.280 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.496852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.496866] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 
00:27:26.281 [2024-07-25 12:13:13.497115] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.497129] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.497501] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.497514] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.497774] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.497787] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.498229] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.498243] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.498623] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.498636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.499096] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.499110] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.499487] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.499501] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.499859] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.499872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.281 [2024-07-25 12:13:13.500237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.281 [2024-07-25 12:13:13.500251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.281 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.500739] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.500754] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 
00:27:26.549 [2024-07-25 12:13:13.501133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.501148] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.501520] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.501533] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.501900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.501914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.502324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.502338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.503019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.503033] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.503414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.503430] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.503793] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.549 [2024-07-25 12:13:13.503813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.549 qpair failed and we were unable to recover it. 00:27:26.549 [2024-07-25 12:13:13.504177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.504192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.504368] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.504382] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.504741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.504755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 
00:27:26.550 [2024-07-25 12:13:13.505171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.505185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.505564] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.505578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.505942] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.505956] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.506312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.506327] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.506749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.506764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.506931] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.506944] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.507114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.507128] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.507607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.507620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.507975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.507988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.508350] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.508364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 
00:27:26.550 [2024-07-25 12:13:13.508879] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.508893] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.509047] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.509062] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.509450] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.509463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.509880] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.509894] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.510263] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.510277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.510693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.510706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.511074] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.511088] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.511538] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.511552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.511935] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.511949] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.512314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.512328] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 
00:27:26.550 [2024-07-25 12:13:13.512759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.512772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.513145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.513159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.513363] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.513377] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.513748] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.513764] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.514118] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.514132] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.514492] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.514505] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.514990] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.515004] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.515377] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.515391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.515808] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.515822] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.516249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.516263] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 
00:27:26.550 [2024-07-25 12:13:13.516746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.516760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.517142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.517155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.517516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.517530] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.517893] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.517907] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.550 qpair failed and we were unable to recover it. 00:27:26.550 [2024-07-25 12:13:13.518266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.550 [2024-07-25 12:13:13.518280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.518732] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.518745] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.519014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.519028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.519412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.519427] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.519588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.519601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.519946] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.519959] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 
00:27:26.551 [2024-07-25 12:13:13.520364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.520378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.520729] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.520743] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.521171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.521185] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.521560] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.521573] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.521999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.522012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.522359] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.522373] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.522810] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.522823] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.523120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.523135] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.523295] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.523308] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.523820] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.523833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 
00:27:26.551 [2024-07-25 12:13:13.524264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.524278] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.524638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.524652] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.524999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.525013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.525379] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.525394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.525758] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.525771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.526212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.526226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.526687] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.526701] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.527139] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.527153] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.527516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.527529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.527908] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.527921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 
00:27:26.551 [2024-07-25 12:13:13.528355] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.528369] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.528796] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.528810] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.529240] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.529254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.529718] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.529732] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.529901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.529914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.530302] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.530316] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.530562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.530576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.531000] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.531013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.531435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.531449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.531876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.531889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 
00:27:26.551 [2024-07-25 12:13:13.532316] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.532330] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.532692] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.532706] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.551 qpair failed and we were unable to recover it. 00:27:26.551 [2024-07-25 12:13:13.533073] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.551 [2024-07-25 12:13:13.533087] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.533512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.533526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.533998] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.534012] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.534530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.534544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.534917] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.534931] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.535307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.535321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.535685] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.535699] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.536075] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.536089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 
00:27:26.552 [2024-07-25 12:13:13.536448] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.536461] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.536887] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.536900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.537344] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.537357] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.537781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.537794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.538210] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.538224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.538584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.538598] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.539177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.539192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.539366] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.539379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.539757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.539771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.540194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.540208] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 
00:27:26.552 [2024-07-25 12:13:13.540635] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.540649] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.541328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.541345] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.541712] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.541728] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.542159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.542175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.542533] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.542547] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.542974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.542988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.543405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.543419] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.543851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.543865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.544233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.544248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.544599] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.544613] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 
00:27:26.552 [2024-07-25 12:13:13.545032] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.545051] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.545408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.545422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.545848] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.545862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.546296] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.546311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.546675] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.546688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.547067] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.547081] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.547451] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.547465] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.547945] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.547958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.548339] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.548353] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 00:27:26.552 [2024-07-25 12:13:13.548722] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.552 [2024-07-25 12:13:13.548736] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.552 qpair failed and we were unable to recover it. 
00:27:26.552 [2024-07-25 12:13:13.549164] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.549178] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.549641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.549655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.550014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.550028] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.550256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.550271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.550637] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.550651] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.551003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.551017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.551385] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.551399] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.551825] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.551839] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.552201] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.552218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.552571] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.552585] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 
00:27:26.553 [2024-07-25 12:13:13.552941] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.552955] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.553382] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.553400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.553832] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.553845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.554071] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.554085] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.554449] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.554463] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.554842] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.554856] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.555275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.555289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.555646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.555659] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.556010] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.556023] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.556510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.556525] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 
00:27:26.553 [2024-07-25 12:13:13.556907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.556921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.557292] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.557306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.557668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.557682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.558049] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.558063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.558424] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.558437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.558863] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.558876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.559303] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.559317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.559733] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.559746] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.560126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.560140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.560515] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.560529] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 
00:27:26.553 [2024-07-25 12:13:13.560896] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.560909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.561331] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.561346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.561947] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.561960] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.562338] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.562352] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.562792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.562806] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.563237] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.563254] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.563680] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.563698] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.553 [2024-07-25 12:13:13.564079] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.553 [2024-07-25 12:13:13.564093] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.553 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.564464] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.564477] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.564853] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.564867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 
00:27:26.554 [2024-07-25 12:13:13.565249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.565264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.565631] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.565644] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.566003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.566017] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.566387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.566401] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.566757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.566771] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.567209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.567223] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.567660] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.567673] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.568029] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.568055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.568585] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.568599] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.568971] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.568984] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 
00:27:26.554 [2024-07-25 12:13:13.569438] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.569452] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.569878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.569892] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.570275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.570289] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.570652] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.570665] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.571092] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.571106] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.571457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.571470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.571861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.571876] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.572252] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.572266] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.572694] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.572707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.573135] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.573149] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 
00:27:26.554 [2024-07-25 12:13:13.573588] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.573601] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.574054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.574068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.574695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.574708] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.574951] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.574965] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.575333] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.575347] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.575762] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.575776] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.576207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.576221] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.554 [2024-07-25 12:13:13.576578] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.554 [2024-07-25 12:13:13.576591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.554 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.576759] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.576772] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.577228] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.577242] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 
00:27:26.555 [2024-07-25 12:13:13.577412] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.577425] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.577926] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.577940] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.578307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.578321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.578745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.578758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.579202] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.579217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.579646] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.579660] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.580085] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.580099] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.580475] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.580488] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.580899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.580913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.581284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.581298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 
00:27:26.555 [2024-07-25 12:13:13.581717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.581731] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.582090] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.582104] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.582471] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.582485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.582962] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.582977] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.583404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.583418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.584030] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.584047] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.584405] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.584418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.584856] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.584870] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.585352] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.585366] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.585724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.585737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 
00:27:26.555 [2024-07-25 12:13:13.586169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.586183] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.586624] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.586638] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.586999] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.587013] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.587436] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.587451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.587881] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.587895] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.588266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.588280] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.588672] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.588686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.589077] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.589091] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.589465] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.589479] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.589905] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.589918] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 
00:27:26.555 [2024-07-25 12:13:13.590353] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.590367] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.590738] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.590752] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.591132] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.591146] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.591529] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.591546] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.555 [2024-07-25 12:13:13.591944] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.555 [2024-07-25 12:13:13.591958] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.555 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.592318] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.592332] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.592760] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.592774] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.593142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.593157] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.593530] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.593544] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.593901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.593914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 
00:27:26.556 [2024-07-25 12:13:13.594506] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.594520] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.595013] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.595027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.595391] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.595405] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.595713] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.595727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.596110] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.596124] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.596577] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.596591] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.596955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.596970] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.597398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.597412] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.597632] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.597646] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.598133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.598147] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 
00:27:26.556 [2024-07-25 12:13:13.598570] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.598584] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.599012] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.599025] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.599478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.599493] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.599865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.599878] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.600307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.600321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.600734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.600748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.601209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.601224] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.601727] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.601741] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.602178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.602192] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.602552] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.602566] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 
00:27:26.556 [2024-07-25 12:13:13.603014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.603030] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.603199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.603213] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.603573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.603586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.604019] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.604034] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.604517] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.604531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.604897] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.604911] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.605342] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.605356] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.605791] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.605805] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.606257] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.606271] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.606627] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.606641] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 
00:27:26.556 [2024-07-25 12:13:13.607057] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.607072] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.607442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.607457] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.556 qpair failed and we were unable to recover it. 00:27:26.556 [2024-07-25 12:13:13.607940] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.556 [2024-07-25 12:13:13.607954] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.608200] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.608214] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.608642] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.608656] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.609035] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.609055] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.609547] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.609562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.609985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.609999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.610422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.610437] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.610851] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.610865] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 
00:27:26.557 [2024-07-25 12:13:13.611233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.611247] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.611746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.611760] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.612189] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.612204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.612682] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.612695] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.613106] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.613121] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.613539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.613552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.613972] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.613986] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.614403] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.614417] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.614921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.614935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.615364] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.615378] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 
00:27:26.557 [2024-07-25 12:13:13.615818] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.615831] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.616312] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.616326] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.616756] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.616769] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.617194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.617209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.617688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.617702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.618204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.618218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.618371] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.618385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.618861] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.618875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.619324] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.619338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.619707] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.619720] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 
00:27:26.557 [2024-07-25 12:13:13.620153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.620167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.620386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.620400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.620770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.620784] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.621127] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.621142] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.621554] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.621568] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.621985] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.621999] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.622354] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.622368] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.622798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.622811] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.623290] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.623304] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.557 qpair failed and we were unable to recover it. 00:27:26.557 [2024-07-25 12:13:13.623715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.557 [2024-07-25 12:13:13.623730] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 
00:27:26.558 [2024-07-25 12:13:13.624158] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.624172] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.624536] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.624550] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.625028] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.625045] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.625549] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.625562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.625878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.625891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.626326] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.626341] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.626764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.626778] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.627274] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.627288] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.627703] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.627717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.628195] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.628210] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 
00:27:26.558 [2024-07-25 12:13:13.628653] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.628667] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.629022] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.629036] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.629470] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.629485] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.629909] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.629923] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.630428] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.630442] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.630886] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.630900] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.631126] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.631141] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.631668] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.631682] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.632054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.632070] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.632548] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.632562] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 
00:27:26.558 [2024-07-25 12:13:13.632994] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.633008] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.633442] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.633456] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.633885] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.633899] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.634378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.634392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.634806] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.634819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.635194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.635209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.635580] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.635594] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.635959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.635973] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.636387] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.636402] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.636899] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.636913] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 
00:27:26.558 [2024-07-25 12:13:13.637414] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.637428] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.637858] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.637872] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.638378] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.638392] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.638921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.638935] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.639380] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.639394] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.639902] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.639916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.640419] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.640436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.558 qpair failed and we were unable to recover it. 00:27:26.558 [2024-07-25 12:13:13.640854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.558 [2024-07-25 12:13:13.640868] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.641346] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.641360] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.641715] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.641729] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 
00:27:26.559 [2024-07-25 12:13:13.642153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.642167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.642665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.642678] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.643124] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.643138] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.643641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.643655] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.643878] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.643891] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.644314] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.644331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.644698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.644712] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.645076] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.645089] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.645504] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.645517] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.645876] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.645889] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 
00:27:26.559 [2024-07-25 12:13:13.646369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.646383] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.646807] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.646820] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.647185] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.647199] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.647416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.647429] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.647723] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.647737] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.648159] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.648173] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.648604] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.648618] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.648975] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.648989] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.649357] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.649370] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.649589] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.649602] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 
00:27:26.559 [2024-07-25 12:13:13.650014] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.650027] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.650466] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.650480] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.650959] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.650972] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.651408] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.651422] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.651847] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.651860] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.652216] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.652230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.652720] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.652734] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.653217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.653231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.653607] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.653620] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 00:27:26.559 [2024-07-25 12:13:13.654120] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.559 [2024-07-25 12:13:13.654133] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.559 qpair failed and we were unable to recover it. 
00:27:26.559 [2024-07-25 12:13:13.654611] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.654624] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.655104] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.655117] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.655641] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.655658] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.656100] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.656114] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.656622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.656636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.657161] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.657175] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.657659] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.657672] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.658153] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.658167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.658673] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.658686] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.659040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.659057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 
00:27:26.560 [2024-07-25 12:13:13.659476] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.659489] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.659907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.659921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.660381] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.660395] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.660895] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.660909] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.661286] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.661300] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.661804] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.661817] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.662332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.662346] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.662768] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.662781] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.663003] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.663016] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.663437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.663451] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 
00:27:26.560 [2024-07-25 12:13:13.663963] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.663976] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.664404] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.664418] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.664844] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.664857] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.665284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.665298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.665574] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.665588] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.666112] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.666126] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.666563] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.666576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.667002] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.667015] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.667456] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.667470] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.667900] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.667914] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 
00:27:26.560 [2024-07-25 12:13:13.668328] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.668342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.668562] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.668576] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.669007] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.669020] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.669457] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.669471] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.669903] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.669916] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.670327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.670340] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.670757] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.670770] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.560 [2024-07-25 12:13:13.671125] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.560 [2024-07-25 12:13:13.671139] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.560 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.671651] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.671664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.672170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.672196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 
00:27:26.561 [2024-07-25 12:13:13.672621] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.672634] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.673061] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.673075] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.673525] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.673539] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.674040] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.674057] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.674482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.674496] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.674816] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.674830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.675259] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.675273] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:26.561 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@862 -- # return 0 00:27:26.561 [2024-07-25 12:13:13.675781] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:27:26.561 [2024-07-25 12:13:13.675797] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 
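The xtrace fragments buried in the line above ((( i == 0 )), return 0, timing_exit start_nvmf_tgt) are the harness concluding that the nvmf target application has finished starting. The repo has its own wait helpers; purely as an illustrative sketch (not the SPDK test helper itself), readiness could be probed by polling the JSON-RPC socket, assuming the default /var/tmp/spdk.sock:

    # illustrative readiness poll, not the harness's own helper
    for i in $(seq 1 30); do
        ./scripts/rpc.py -s /var/tmp/spdk.sock rpc_get_methods >/dev/null 2>&1 && break
        sleep 1
    done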
00:27:26.561 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:26.561 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.561 [2024-07-25 12:13:13.676283] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.676298] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.676662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.676675] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.677169] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.677184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.677539] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.677552] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.677901] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.677915] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.678416] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.678432] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.678913] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.678926] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.679351] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.679365] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.679862] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.679877] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 
00:27:26.561 [2024-07-25 12:13:13.680384] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.680397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.680780] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.680794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.681273] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.681287] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.681787] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.681802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.682298] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.682312] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.682798] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.682812] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.683244] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.683259] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.683689] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.683702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.684142] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.684156] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.684734] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.684748] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 
00:27:26.561 [2024-07-25 12:13:13.685205] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.685219] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.685654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.685668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.686114] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.686130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.686516] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.686531] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.687054] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.687068] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.687512] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.561 [2024-07-25 12:13:13.687526] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.561 qpair failed and we were unable to recover it. 00:27:26.561 [2024-07-25 12:13:13.687983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.687996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.688435] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.688449] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.688933] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.688947] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.689369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.689384] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 
00:27:26.562 [2024-07-25 12:13:13.689865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.689879] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.690323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.690337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.690711] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.690725] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.691463] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.691478] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.691854] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.691867] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.692374] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.692391] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.692799] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.692813] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.693306] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.693321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.693745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.693759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.694272] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.694286] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 
00:27:26.562 [2024-07-25 12:13:13.694709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.694723] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.695177] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.695191] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.695618] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.695632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.696128] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.696143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.696573] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.696586] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.697025] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.697039] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.697510] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.697524] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.698052] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.698066] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.698509] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.698527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.698907] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.698921] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 
00:27:26.562 [2024-07-25 12:13:13.699383] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.699397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.699868] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.699882] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.700372] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.700386] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.700811] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.700824] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.701317] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.701331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.701665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.701679] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.702129] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.702143] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.702575] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.702589] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.703048] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.703063] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.703523] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.703537] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 
00:27:26.562 [2024-07-25 12:13:13.703921] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.703934] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.704365] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.704379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.704860] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.704874] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.562 [2024-07-25 12:13:13.705422] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.562 [2024-07-25 12:13:13.705436] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.562 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.705821] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.705836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.706264] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.706279] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.707018] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.707032] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.707482] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.707497] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.707873] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.707886] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.708386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.708400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 
00:27:26.563 [2024-07-25 12:13:13.708822] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.708836] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 [2024-07-25 12:13:13.709323] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.709337] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT
00:27:26.563 [2024-07-25 12:13:13.709741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.709758] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@19 -- # rpc_cmd bdev_malloc_create 64 512 -b Malloc0
00:27:26.563 [2024-07-25 12:13:13.710261] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.710276] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable
00:27:26.563 [2024-07-25 12:13:13.710709] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x
00:27:26.563 [2024-07-25 12:13:13.710726] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 [2024-07-25 12:13:13.711256] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.711270] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 [2024-07-25 12:13:13.711724] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.711738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
00:27:26.563 [2024-07-25 12:13:13.712217] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111
00:27:26.563 [2024-07-25 12:13:13.712231] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420
00:27:26.563 qpair failed and we were unable to recover it.
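The rpc_cmd trace above is the test script creating its backing device. Outside the harness, roughly the same call can be issued with the scripts/rpc.py client bundled with SPDK; the arguments below are copied from the trace, while the script path and socket are assumed defaults:

    # create a 64 MB malloc (RAM-backed) bdev with 512-byte blocks, named Malloc0
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_malloc_create 64 512 -b Malloc0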
00:27:26.563 [2024-07-25 12:13:13.712622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.712636] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.713060] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.713074] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.713462] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.713476] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.713983] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.713996] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.714513] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.714527] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.714955] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.714968] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.715478] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.715492] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.715875] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.715890] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.716386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.716400] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.716826] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.716840] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 
00:27:26.563 [2024-07-25 12:13:13.717300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.717314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.717746] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.717761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.718246] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.718261] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.718693] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.718707] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.719211] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.719226] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.719674] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.563 [2024-07-25 12:13:13.719688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.563 qpair failed and we were unable to recover it. 00:27:26.563 [2024-07-25 12:13:13.720122] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.720137] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.720584] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.720597] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.720974] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.720988] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.721429] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.721443] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 
00:27:26.564 [2024-07-25 12:13:13.721817] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.721830] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.722291] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.722306] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.722794] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.722808] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.723249] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.723264] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.723702] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.723716] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.724235] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.724251] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.724665] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.724680] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.725178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.725193] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.725698] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.725713] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.726224] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.726239] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 
00:27:26.564 [2024-07-25 12:13:13.726705] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.726721] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.727152] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.727167] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.727649] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.727664] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.728170] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.728184] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.728662] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.728676] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.729117] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.729130] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x23e8f30 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.729238] nvme_tcp.c: 327:nvme_tcp_qpair_set_recv_state: *ERROR*: The recv state of tqpair=0x23f6ff0 is same with the state(5) to be set 00:27:26.564 [2024-07-25 12:13:13.729683] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.729717] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 Malloc0 00:27:26.564 [2024-07-25 12:13:13.730178] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.730196] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.730654] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.730669] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 
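The bare "Malloc0" above is the RPC response echoing the name of the bdev that was just created; note also that around the nvme_tcp_qpair_set_recv_state message the failing tqpair handle changes from 0x23e8f30 to 0x7f9154000b90, suggesting the host dropped the old qpair object and is retrying with a fresh one. If one wanted to double-check the bdev from the same shell, a minimal sketch (assuming the default RPC socket):

    # confirm the malloc bdev exists and inspect its parameters
    ./scripts/rpc.py -s /var/tmp/spdk.sock bdev_get_bdevs -b Malloc0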
00:27:26.564 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.564 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@21 -- # rpc_cmd nvmf_create_transport -t tcp -o 00:27:26.564 [2024-07-25 12:13:13.731212] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.731227] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.564 [2024-07-25 12:13:13.731614] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.731628] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.564 [2024-07-25 12:13:13.732123] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.732140] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.732620] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.732633] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.733171] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.733186] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.733688] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.733702] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.734209] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.734225] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.734765] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.734779] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f9154000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 
00:27:26.564 [2024-07-25 12:13:13.735345] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.735364] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.735874] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.735885] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.736386] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.736397] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.736891] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.736901] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.564 [2024-07-25 12:13:13.737398] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.564 [2024-07-25 12:13:13.737408] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.564 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.737553] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:26.565 [2024-07-25 12:13:13.737845] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.737855] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.738327] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.738338] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.738790] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.738800] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.739311] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.739321] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 
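The "*** TCP Transport Init ***" notice is the target acknowledging the rpc_cmd nvmf_create_transport -t tcp -o call traced just above. A standalone sketch of the same step via rpc.py, with the flags copied verbatim from the trace (their exact meaning is not asserted here) and the socket path assumed:

    # initialize the NVMe-oF TCP transport inside the running target
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_transport -t tcp -o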
00:27:26.565 [2024-07-25 12:13:13.739728] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.739738] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.740220] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.740230] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.740741] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.740751] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.741289] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.741299] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.741770] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.741783] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.742307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.742318] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.742745] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.742755] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.565 [2024-07-25 12:13:13.743222] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.743233] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 
00:27:26.565 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 00:27:26.565 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.565 [2024-07-25 12:13:13.743750] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.743761] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.565 [2024-07-25 12:13:13.744307] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.744317] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.744809] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.744819] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.745320] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.745331] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.745753] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.745763] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.746207] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.746217] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.746714] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.746724] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.747199] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.747209] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 
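Next the script creates the subsystem the host will connect to. A roughly equivalent direct call, with arguments taken from the trace above (-a allows any host NQN, -s sets the serial number) and the rpc.py path/socket assumed:

    # create NVMe-oF subsystem cnode1, allow any host, set its serial number
    ./scripts/rpc.py -s /var/tmp/spdk.sock nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001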
00:27:26.565 [2024-07-25 12:13:13.747701] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.747711] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.748238] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.748248] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.748695] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.748705] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.749194] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.749204] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.749647] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.749657] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.750145] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.750155] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.750569] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.750578] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.751009] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.751019] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.751437] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.751447] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.751865] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.751875] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 
00:27:26.565 [2024-07-25 12:13:13.752369] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.752379] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.752823] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.752833] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.753304] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.753314] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.753834] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.565 [2024-07-25 12:13:13.753845] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.565 qpair failed and we were unable to recover it. 00:27:26.565 [2024-07-25 12:13:13.754375] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.754385] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.754852] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.754862] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.566 [2024-07-25 12:13:13.755284] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.755294] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@24 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.566 [2024-07-25 12:13:13.755788] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.755798] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 
00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.566 [2024-07-25 12:13:13.756300] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.756311] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.756730] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.756740] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.757160] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.757170] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.757638] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.757648] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.758167] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.758177] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.758678] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.758688] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.759204] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.759216] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.759749] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.759759] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.760275] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.760285] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 
00:27:26.566 [2024-07-25 12:13:13.760784] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.760794] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.761266] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.761277] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.761792] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.761802] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.762332] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.762342] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.762764] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.762773] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.566 [2024-07-25 12:13:13.763233] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.763244] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@25 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:27:26.566 [2024-07-25 12:13:13.763658] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.763668] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.566 [2024-07-25 12:13:13.764133] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.764144] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 
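[annotation, not part of the captured output] The repeated "connect() failed, errno = 111" / "sock connection error of tqpair=0x7f914c000b90" pairs above are the initiator-side socket layer retrying its TCP connect to 10.0.0.2:4420. On Linux, errno 111 is ECONNREFUSED, which is consistent with the NVMe/TCP listener on that address not having been created yet at this point in the trace. A quick way to confirm the errno mapping on the build host (hypothetical one-liner, not taken from the test script):

    python3 -c 'import errno, os; print(errno.errorcode[111], "-", os.strerror(111))'
    # ECONNREFUSED - Connection refused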
00:27:26.566 [2024-07-25 12:13:13.764622] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.764632] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.765149] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.765159] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.765717] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.765727] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.766208] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.766218] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.766584] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:27:26.566 [2024-07-25 12:13:13.766731] posix.c:1023:posix_sock_create: *ERROR*: connect() failed, errno = 111 00:27:26.566 [2024-07-25 12:13:13.766742] nvme_tcp.c:2383:nvme_tcp_qpair_connect_sock: *ERROR*: sock connection error of tqpair=0x7f914c000b90 with addr=10.0.0.2, port=4420 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 [2024-07-25 12:13:13.768177] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.566 [2024-07-25 12:13:13.768337] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.566 [2024-07-25 12:13:13.768357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.566 [2024-07-25 12:13:13.768365] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.566 [2024-07-25 12:13:13.768372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.566 [2024-07-25 12:13:13.768393] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.566 qpair failed and we were unable to recover it. 
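[annotation, not part of the captured output] Once the target prints "*** NVMe/TCP Target Listening on 10.0.0.2 port 4420 ***" the failure mode changes: the TCP connect now succeeds, but the NVMe-oF Fabrics CONNECT command for the I/O queue is rejected (target side: "Unknown controller ID 0x1"; initiator side: "Connect command completed with error: sct 1, sc 130"). 130 decimal is 0x82, and with SCT 1 (command-specific status) that corresponds, as far as I recall from the NVMe-oF Fabrics spec, to "Connect Invalid Parameters", which lines up with the target not recognizing the controller ID carried in the CONNECT data. Trivial hex check:

    printf 'sc=0x%02x\n' 130    # prints sc=0x82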
00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@26 -- # rpc_cmd nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@559 -- # xtrace_disable 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:26.566 [2024-07-25 12:13:13.778187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.566 [2024-07-25 12:13:13.778339] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.566 [2024-07-25 12:13:13.778358] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.566 [2024-07-25 12:13:13.778366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.566 [2024-07-25 12:13:13.778372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.566 [2024-07-25 12:13:13.778389] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.566 qpair failed and we were unable to recover it. 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:27:26.566 12:13:13 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@50 -- # wait 484941 00:27:26.567 [2024-07-25 12:13:13.788160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.567 [2024-07-25 12:13:13.788300] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.567 [2024-07-25 12:13:13.788318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.567 [2024-07-25 12:13:13.788326] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.567 [2024-07-25 12:13:13.788332] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.567 [2024-07-25 12:13:13.788349] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.567 qpair failed and we were unable to recover it. 
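[annotation, not part of the captured output] The shell trace interleaved above is the tc2 setup path of host/target_disconnect.sh: it provisions the target over SPDK's JSON-RPC interface and then waits on the previously started background process (wait 484941). Collected in one place, and assuming rpc_cmd in the harness forwards to SPDK's scripts/rpc.py against the default RPC socket, the calls visible in this part of the trace are roughly:

    ./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001
    ./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Malloc0
    ./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    ./scripts/rpc.py nvmf_subsystem_add_listener discovery -t tcp -a 10.0.0.2 -s 4420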
00:27:26.828 [2024-07-25 12:13:13.798145] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.828 [2024-07-25 12:13:13.798294] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.828 [2024-07-25 12:13:13.798312] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.828 [2024-07-25 12:13:13.798320] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.828 [2024-07-25 12:13:13.798326] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.828 [2024-07-25 12:13:13.798344] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.828 qpair failed and we were unable to recover it. 00:27:26.828 [2024-07-25 12:13:13.808354] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.828 [2024-07-25 12:13:13.808512] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.828 [2024-07-25 12:13:13.808530] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.828 [2024-07-25 12:13:13.808537] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.828 [2024-07-25 12:13:13.808543] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.828 [2024-07-25 12:13:13.808560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.828 qpair failed and we were unable to recover it. 00:27:26.828 [2024-07-25 12:13:13.818214] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.828 [2024-07-25 12:13:13.818358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.828 [2024-07-25 12:13:13.818376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.828 [2024-07-25 12:13:13.818383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.828 [2024-07-25 12:13:13.818389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.828 [2024-07-25 12:13:13.818406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.828 qpair failed and we were unable to recover it. 
00:27:26.828 [2024-07-25 12:13:13.828227] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.828 [2024-07-25 12:13:13.828374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.828 [2024-07-25 12:13:13.828392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.828 [2024-07-25 12:13:13.828403] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.828 [2024-07-25 12:13:13.828409] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.828 [2024-07-25 12:13:13.828427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.828 qpair failed and we were unable to recover it. 00:27:26.828 [2024-07-25 12:13:13.838157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.828 [2024-07-25 12:13:13.838304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.828 [2024-07-25 12:13:13.838322] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.828 [2024-07-25 12:13:13.838329] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.828 [2024-07-25 12:13:13.838335] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.838352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.848217] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.848364] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.848381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.848388] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.848394] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.848410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 
00:27:26.829 [2024-07-25 12:13:13.858289] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.858434] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.858452] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.858459] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.858466] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.858484] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.868318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.868461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.868478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.868486] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.868491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.868508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.878337] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.878484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.878502] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.878508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.878514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.878531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 
00:27:26.829 [2024-07-25 12:13:13.888429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.888591] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.888609] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.888617] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.888622] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.888639] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.898415] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.898561] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.898579] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.898587] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.898595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.898613] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.908431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.908572] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.908590] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.908596] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.908602] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.908620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 
00:27:26.829 [2024-07-25 12:13:13.918412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.918559] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.918580] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.918588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.918594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.918610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.928441] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.928587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.928605] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.928612] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.928618] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.928635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.938544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.938689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.938707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.938714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.938720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.938736] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 
00:27:26.829 [2024-07-25 12:13:13.948591] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.948746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.948763] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.948770] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.948776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.948793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.958568] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.829 [2024-07-25 12:13:13.958717] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.829 [2024-07-25 12:13:13.958734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.829 [2024-07-25 12:13:13.958741] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.829 [2024-07-25 12:13:13.958747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.829 [2024-07-25 12:13:13.958767] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.829 qpair failed and we were unable to recover it. 00:27:26.829 [2024-07-25 12:13:13.968632] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:13.968776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:13.968793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:13.968800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:13.968806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:13.968822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 
00:27:26.830 [2024-07-25 12:13:13.978660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:13.978799] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:13.978818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:13.978825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:13.978831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:13.978846] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:13.988682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:13.988824] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:13.988848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:13.988855] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:13.988861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:13.988878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:13.998765] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:13.998921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:13.998938] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:13.998945] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:13.998951] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:13.998968] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 
00:27:26.830 [2024-07-25 12:13:14.008789] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.008937] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.008958] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.008965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.008971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.008987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:14.018741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.018891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.018908] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.018915] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.018921] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.018938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:14.028768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.028909] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.028927] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.028934] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.028940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.028958] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 
00:27:26.830 [2024-07-25 12:13:14.038797] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.038944] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.038962] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.038969] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.038975] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.038991] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:14.048874] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.049021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.049038] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.049050] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.049056] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.049077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:26.830 [2024-07-25 12:13:14.058900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.059052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.059070] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.059077] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.059083] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.059099] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 
00:27:26.830 [2024-07-25 12:13:14.068842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:26.830 [2024-07-25 12:13:14.068991] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:26.830 [2024-07-25 12:13:14.069008] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:26.830 [2024-07-25 12:13:14.069015] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:26.830 [2024-07-25 12:13:14.069021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:26.830 [2024-07-25 12:13:14.069038] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:26.830 qpair failed and we were unable to recover it. 00:27:27.091 [2024-07-25 12:13:14.078912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.091 [2024-07-25 12:13:14.079059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.091 [2024-07-25 12:13:14.079077] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.091 [2024-07-25 12:13:14.079084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.091 [2024-07-25 12:13:14.079090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.091 [2024-07-25 12:13:14.079107] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.091 qpair failed and we were unable to recover it. 00:27:27.091 [2024-07-25 12:13:14.088948] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.091 [2024-07-25 12:13:14.089103] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.091 [2024-07-25 12:13:14.089122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.091 [2024-07-25 12:13:14.089129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.091 [2024-07-25 12:13:14.089135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.091 [2024-07-25 12:13:14.089152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.091 qpair failed and we were unable to recover it. 
00:27:27.091 [2024-07-25 12:13:14.098980] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.091 [2024-07-25 12:13:14.099132] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.091 [2024-07-25 12:13:14.099150] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.091 [2024-07-25 12:13:14.099157] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.091 [2024-07-25 12:13:14.099163] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.091 [2024-07-25 12:13:14.099180] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.091 qpair failed and we were unable to recover it. 00:27:27.091 [2024-07-25 12:13:14.109021] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.091 [2024-07-25 12:13:14.109166] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.091 [2024-07-25 12:13:14.109184] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.109191] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.109197] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.109213] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.119017] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.119194] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.119212] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.119219] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.119225] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.119241] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 
00:27:27.092 [2024-07-25 12:13:14.129079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.129220] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.129238] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.129245] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.129251] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.129268] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.139094] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.139231] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.139249] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.139256] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.139266] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.139283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.149170] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.149330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.149348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.149355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.149361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.149378] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 
00:27:27.092 [2024-07-25 12:13:14.159136] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.159280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.159298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.159305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.159311] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.159328] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.169183] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.169327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.169345] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.169352] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.169358] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.169374] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.179196] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.179335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.179353] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.179360] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.179366] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.179382] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 
00:27:27.092 [2024-07-25 12:13:14.189206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.189350] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.189367] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.189375] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.189380] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.189397] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.199250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.199393] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.199410] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.199417] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.199423] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.199440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.209278] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.209423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.209440] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.209447] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.209452] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.209469] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 
00:27:27.092 [2024-07-25 12:13:14.219315] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.219461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.219478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.219485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.219491] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.219507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.229316] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.229458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.229476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.229487] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.092 [2024-07-25 12:13:14.229492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.092 [2024-07-25 12:13:14.229508] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.092 qpair failed and we were unable to recover it. 00:27:27.092 [2024-07-25 12:13:14.239352] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.092 [2024-07-25 12:13:14.239498] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.092 [2024-07-25 12:13:14.239515] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.092 [2024-07-25 12:13:14.239522] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.239529] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.239545] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 
00:27:27.093 [2024-07-25 12:13:14.249411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.249601] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.249619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.249626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.249632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.249649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.259439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.259596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.259613] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.259620] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.259626] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.259642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.269463] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.269607] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.269624] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.269632] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.269637] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.269654] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 
00:27:27.093 [2024-07-25 12:13:14.279479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.279623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.279640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.279647] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.279653] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.279669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.289528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.289674] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.289691] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.289698] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.289704] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.289721] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.299564] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.299701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.299718] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.299725] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.299732] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.299748] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 
00:27:27.093 [2024-07-25 12:13:14.309592] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.309736] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.309754] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.309761] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.309767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.309783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.319597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.319740] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.319757] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.319768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.319774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.319790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.093 [2024-07-25 12:13:14.329636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.329779] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.329796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.329804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.329809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.329826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 
00:27:27.093 [2024-07-25 12:13:14.339662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.093 [2024-07-25 12:13:14.339801] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.093 [2024-07-25 12:13:14.339819] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.093 [2024-07-25 12:13:14.339826] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.093 [2024-07-25 12:13:14.339832] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.093 [2024-07-25 12:13:14.339848] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.093 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-25 12:13:14.349677] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.354 [2024-07-25 12:13:14.349822] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.354 [2024-07-25 12:13:14.349839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.354 [2024-07-25 12:13:14.349846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.354 [2024-07-25 12:13:14.349852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.354 [2024-07-25 12:13:14.349869] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.354 qpair failed and we were unable to recover it. 00:27:27.354 [2024-07-25 12:13:14.359715] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.354 [2024-07-25 12:13:14.359856] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.354 [2024-07-25 12:13:14.359873] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.354 [2024-07-25 12:13:14.359880] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.354 [2024-07-25 12:13:14.359886] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.354 [2024-07-25 12:13:14.359902] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.354 qpair failed and we were unable to recover it. 
00:27:27.354 [2024-07-25 12:13:14.369967] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.354 [2024-07-25 12:13:14.370117] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.354 [2024-07-25 12:13:14.370134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.370141] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.370147] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.370163] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.379738] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.379927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.379945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.379952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.379958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.379974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.389810] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.389956] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.389973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.389980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.389986] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.390003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-25 12:13:14.399830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.399983] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.400001] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.400008] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.400014] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.400030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.409897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.410059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.410080] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.410087] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.410093] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.410110] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.419832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.419980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.419997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.420005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.420011] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.420027] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-25 12:13:14.429851] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.429994] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.430011] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.430018] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.430024] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.430040] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.439938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.440086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.440103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.440110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.440116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.440134] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.449986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.450134] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.450151] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.450158] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.450164] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.450188] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-25 12:13:14.460025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.460180] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.460197] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.460204] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.460209] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.460226] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.470032] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.470174] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.470192] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.470199] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.470205] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.470221] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.480068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.480218] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.480235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.480243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.480248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.480265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 
00:27:27.355 [2024-07-25 12:13:14.490112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.490255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.490272] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.490279] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.355 [2024-07-25 12:13:14.490285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.355 [2024-07-25 12:13:14.490302] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.355 qpair failed and we were unable to recover it. 00:27:27.355 [2024-07-25 12:13:14.500148] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.355 [2024-07-25 12:13:14.500297] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.355 [2024-07-25 12:13:14.500318] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.355 [2024-07-25 12:13:14.500325] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.500331] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.500348] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.510178] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.510315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.510333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.510340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.510346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.510362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-25 12:13:14.520297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.520443] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.520460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.520467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.520473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.520490] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.530235] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.530383] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.530401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.530408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.530413] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.530430] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.540293] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.540449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.540467] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.540474] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.540483] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.540500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-25 12:13:14.550284] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.550425] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.550442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.550450] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.550456] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.550472] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.560297] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.560444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.560461] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.560468] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.560474] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.560491] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.570351] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.570493] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.570510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.570517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.570523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.570539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.356 [2024-07-25 12:13:14.580373] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.580514] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.580531] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.580538] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.580544] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.580560] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.590376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.590523] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.590541] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.590548] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.590554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.590570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 00:27:27.356 [2024-07-25 12:13:14.600411] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.356 [2024-07-25 12:13:14.600551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.356 [2024-07-25 12:13:14.600569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.356 [2024-07-25 12:13:14.600576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.356 [2024-07-25 12:13:14.600582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.356 [2024-07-25 12:13:14.600599] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.356 qpair failed and we were unable to recover it. 
00:27:27.618 [2024-07-25 12:13:14.610422] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.610567] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.610585] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.610593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.610599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.610616] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.620483] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.620623] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.620641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.620648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.620655] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.620672] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.630513] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.630660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.630678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.630689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.630696] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.630712] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 
00:27:27.618 [2024-07-25 12:13:14.640521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.640662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.640680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.640687] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.640693] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.640710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.650563] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.650707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.650725] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.650732] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.650738] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.650755] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.660606] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.660747] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.660764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.660771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.660777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.660794] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 
00:27:27.618 [2024-07-25 12:13:14.670627] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.670785] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.670803] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.670810] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.670816] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.670833] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.680639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.680780] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.680797] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.680805] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.680810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.680827] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.690662] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.690810] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.690828] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.690835] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.690841] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.690858] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 
00:27:27.618 [2024-07-25 12:13:14.700697] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.618 [2024-07-25 12:13:14.700845] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.618 [2024-07-25 12:13:14.700862] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.618 [2024-07-25 12:13:14.700870] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.618 [2024-07-25 12:13:14.700875] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.618 [2024-07-25 12:13:14.700893] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.618 qpair failed and we were unable to recover it. 00:27:27.618 [2024-07-25 12:13:14.710717] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.710857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.710874] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.710882] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.710888] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.710905] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.720785] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.720928] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.720945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.720956] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.720962] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.720980] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 
00:27:27.619 [2024-07-25 12:13:14.730791] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.730939] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.730957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.730965] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.730971] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.730987] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.740856] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.740997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.741015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.741022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.741028] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.741051] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.750880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.751047] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.751066] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.751073] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.751079] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.751095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 
00:27:27.619 [2024-07-25 12:13:14.760866] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.761009] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.761026] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.761033] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.761039] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.761061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.770950] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.771119] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.771137] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.771144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.771150] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.771167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.780958] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.781116] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.781134] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.781142] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.781148] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.781164] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 
00:27:27.619 [2024-07-25 12:13:14.790887] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.791035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.791060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.791067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.791073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.791090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.800922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.801281] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.801298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.801305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.801310] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.801326] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.810944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.811097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.811117] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.811125] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.811130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.811147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 
00:27:27.619 [2024-07-25 12:13:14.821030] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.821212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.821230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.619 [2024-07-25 12:13:14.821237] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.619 [2024-07-25 12:13:14.821243] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.619 [2024-07-25 12:13:14.821260] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.619 qpair failed and we were unable to recover it. 00:27:27.619 [2024-07-25 12:13:14.830999] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.619 [2024-07-25 12:13:14.831147] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.619 [2024-07-25 12:13:14.831165] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.620 [2024-07-25 12:13:14.831172] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.620 [2024-07-25 12:13:14.831178] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.620 [2024-07-25 12:13:14.831194] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.620 qpair failed and we were unable to recover it. 00:27:27.620 [2024-07-25 12:13:14.841150] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.620 [2024-07-25 12:13:14.841292] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.620 [2024-07-25 12:13:14.841311] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.620 [2024-07-25 12:13:14.841319] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.620 [2024-07-25 12:13:14.841325] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.620 [2024-07-25 12:13:14.841341] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.620 qpair failed and we were unable to recover it. 
00:27:27.620 [2024-07-25 12:13:14.851060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.620 [2024-07-25 12:13:14.851212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.620 [2024-07-25 12:13:14.851229] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.620 [2024-07-25 12:13:14.851235] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.620 [2024-07-25 12:13:14.851241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.620 [2024-07-25 12:13:14.851261] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.620 qpair failed and we were unable to recover it. 00:27:27.620 [2024-07-25 12:13:14.861173] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.620 [2024-07-25 12:13:14.861315] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.620 [2024-07-25 12:13:14.861333] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.620 [2024-07-25 12:13:14.861340] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.620 [2024-07-25 12:13:14.861346] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.620 [2024-07-25 12:13:14.861363] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.620 qpair failed and we were unable to recover it. 00:27:27.881 [2024-07-25 12:13:14.871157] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.871302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.871320] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.871327] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.871333] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.871350] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 
00:27:27.881 [2024-07-25 12:13:14.881215] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.881358] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.881376] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.881383] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.881389] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.881406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 00:27:27.881 [2024-07-25 12:13:14.891257] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.891399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.891416] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.891423] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.891429] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.891446] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 00:27:27.881 [2024-07-25 12:13:14.901220] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.901360] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.901381] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.901387] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.901393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.901410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 
00:27:27.881 [2024-07-25 12:13:14.911319] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.911460] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.911478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.911484] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.911490] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.911507] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 00:27:27.881 [2024-07-25 12:13:14.921306] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.881 [2024-07-25 12:13:14.921485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.881 [2024-07-25 12:13:14.921503] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.881 [2024-07-25 12:13:14.921510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.881 [2024-07-25 12:13:14.921516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.881 [2024-07-25 12:13:14.921533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.881 qpair failed and we were unable to recover it. 00:27:27.881 [2024-07-25 12:13:14.931303] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.931449] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.931468] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.931475] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.931481] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.931501] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 
00:27:27.882 [2024-07-25 12:13:14.941335] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.941477] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.941495] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.941502] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.941513] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.941530] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:14.951456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.951604] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.951621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.951629] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.951635] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.951652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:14.961393] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.961538] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.961556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.961563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.961570] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.961587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 
00:27:27.882 [2024-07-25 12:13:14.971498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.971641] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.971659] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.971666] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.971672] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.971689] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:14.981540] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.981884] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.981901] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.981908] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.981913] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.981929] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:14.991479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:14.991624] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:14.991641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:14.991648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:14.991654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:14.991671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 
00:27:27.882 [2024-07-25 12:13:15.001560] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.001705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.001722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.001730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.001736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.001752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:15.011749] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.011899] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.011917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.011924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.011930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.011947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:15.021618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.021757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.021775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.021782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.021788] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.021804] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 
00:27:27.882 [2024-07-25 12:13:15.031659] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.031802] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.031820] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.031827] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.031836] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.031853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:15.041618] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.041761] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.041778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.041785] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.041791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.041808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 00:27:27.882 [2024-07-25 12:13:15.051713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.051857] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.882 [2024-07-25 12:13:15.051875] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.882 [2024-07-25 12:13:15.051881] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.882 [2024-07-25 12:13:15.051887] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.882 [2024-07-25 12:13:15.051904] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.882 qpair failed and we were unable to recover it. 
00:27:27.882 [2024-07-25 12:13:15.061740] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.882 [2024-07-25 12:13:15.061878] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.061896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.061902] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.061908] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.061925] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 00:27:27.883 [2024-07-25 12:13:15.071774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.071914] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.071932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.071939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.071944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.071961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 00:27:27.883 [2024-07-25 12:13:15.081736] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.081880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.081897] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.081904] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.081910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.081927] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 
00:27:27.883 [2024-07-25 12:13:15.091872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.092028] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.092050] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.092057] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.092064] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.092081] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 00:27:27.883 [2024-07-25 12:13:15.101868] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.102011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.102028] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.102035] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.102041] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.102064] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 00:27:27.883 [2024-07-25 12:13:15.111939] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.112104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.112122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.112128] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.112134] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.112151] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 
00:27:27.883 [2024-07-25 12:13:15.121911] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:27.883 [2024-07-25 12:13:15.122065] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:27.883 [2024-07-25 12:13:15.122082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:27.883 [2024-07-25 12:13:15.122092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:27.883 [2024-07-25 12:13:15.122099] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:27.883 [2024-07-25 12:13:15.122114] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:27.883 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.132001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.132152] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.132170] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.132177] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.132183] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:28.145 [2024-07-25 12:13:15.132199] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.141965] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.142109] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.142127] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.142134] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.142140] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:28.145 [2024-07-25 12:13:15.142157] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:28.145 qpair failed and we were unable to recover it. 
00:27:28.145 [2024-07-25 12:13:15.152040] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.152234] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.152264] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.152276] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.152285] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.152310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.162034] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.162184] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.162204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.162212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.162218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.162236] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.172046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.172198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.172218] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.172225] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.172231] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.172248] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 
00:27:28.145 [2024-07-25 12:13:15.182083] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.182222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.182241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.182248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.182254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.182270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.192036] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.192182] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.192201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.192207] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.192213] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.192230] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.202139] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.202313] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.202332] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.202339] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.202345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.202361] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 
00:27:28.145 [2024-07-25 12:13:15.212190] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.145 [2024-07-25 12:13:15.212335] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.145 [2024-07-25 12:13:15.212357] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.145 [2024-07-25 12:13:15.212364] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.145 [2024-07-25 12:13:15.212370] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.145 [2024-07-25 12:13:15.212387] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.145 qpair failed and we were unable to recover it. 00:27:28.145 [2024-07-25 12:13:15.222198] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.222342] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.222360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.222367] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.222373] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.222390] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.232254] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.232396] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.232415] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.232422] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.232428] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.232445] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 
00:27:28.146 [2024-07-25 12:13:15.242291] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.242432] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.242451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.242458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.242464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.242480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.252287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.252433] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.252451] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.252458] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.252464] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.252480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.262326] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.262472] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.262491] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.262498] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.262504] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.262520] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 
00:27:28.146 [2024-07-25 12:13:15.272355] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.272503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.272521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.272528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.272534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.272550] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.282399] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.282540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.282558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.282565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.282571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.282587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.292409] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.292553] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.292572] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.292579] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.292585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.292601] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 
00:27:28.146 [2024-07-25 12:13:15.302417] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.302560] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.302581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.302588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.302595] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.302611] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.312477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.312622] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.312641] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.312648] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.312654] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.312671] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.322505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.322662] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.322680] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.322688] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.322694] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.322710] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 
00:27:28.146 [2024-07-25 12:13:15.332532] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.332676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.332695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.332702] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.332707] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.332724] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.342561] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.342705] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.146 [2024-07-25 12:13:15.342723] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.146 [2024-07-25 12:13:15.342730] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.146 [2024-07-25 12:13:15.342736] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.146 [2024-07-25 12:13:15.342756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.146 qpair failed and we were unable to recover it. 00:27:28.146 [2024-07-25 12:13:15.352578] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.146 [2024-07-25 12:13:15.352720] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.147 [2024-07-25 12:13:15.352739] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.147 [2024-07-25 12:13:15.352746] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.147 [2024-07-25 12:13:15.352751] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.147 [2024-07-25 12:13:15.352768] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.147 qpair failed and we were unable to recover it. 
00:27:28.147 [2024-07-25 12:13:15.362549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.147 [2024-07-25 12:13:15.362693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.147 [2024-07-25 12:13:15.362711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.147 [2024-07-25 12:13:15.362718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.147 [2024-07-25 12:13:15.362724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.147 [2024-07-25 12:13:15.362740] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.147 qpair failed and we were unable to recover it. 00:27:28.147 [2024-07-25 12:13:15.372649] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.147 [2024-07-25 12:13:15.372794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.147 [2024-07-25 12:13:15.372813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.147 [2024-07-25 12:13:15.372820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.147 [2024-07-25 12:13:15.372825] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.147 [2024-07-25 12:13:15.372842] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.147 qpair failed and we were unable to recover it. 00:27:28.147 [2024-07-25 12:13:15.382681] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.147 [2024-07-25 12:13:15.382825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.147 [2024-07-25 12:13:15.382844] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.147 [2024-07-25 12:13:15.382851] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.147 [2024-07-25 12:13:15.382857] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.147 [2024-07-25 12:13:15.382873] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.147 qpair failed and we were unable to recover it. 
00:27:28.147 [2024-07-25 12:13:15.392682] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.147 [2024-07-25 12:13:15.392825] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.147 [2024-07-25 12:13:15.392847] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.147 [2024-07-25 12:13:15.392854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.147 [2024-07-25 12:13:15.392861] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.147 [2024-07-25 12:13:15.392878] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.147 qpair failed and we were unable to recover it. 00:27:28.408 [2024-07-25 12:13:15.402731] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.402877] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.402896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.402903] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.402910] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.402926] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 00:27:28.408 [2024-07-25 12:13:15.412759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.412901] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.412919] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.412926] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.412931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.412948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 
00:27:28.408 [2024-07-25 12:13:15.422795] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.422938] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.422957] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.422964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.422970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.422986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 00:27:28.408 [2024-07-25 12:13:15.432811] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.432954] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.432973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.432979] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.432985] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.433009] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 00:27:28.408 [2024-07-25 12:13:15.442824] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.442971] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.442990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.442997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.443003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.443020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 
00:27:28.408 [2024-07-25 12:13:15.452827] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.408 [2024-07-25 12:13:15.452973] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.408 [2024-07-25 12:13:15.452991] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.408 [2024-07-25 12:13:15.452998] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.408 [2024-07-25 12:13:15.453004] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.408 [2024-07-25 12:13:15.453020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.408 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.462897] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.463039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.463063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.463070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.463076] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.463093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.472912] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.473058] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.473076] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.473083] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.473089] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.473105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 
00:27:28.409 [2024-07-25 12:13:15.482938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.483100] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.483122] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.483129] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.483135] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.483152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.492985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.493139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.493158] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.493165] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.493171] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.493187] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.503016] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.503161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.503179] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.503186] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.503192] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.503209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 
00:27:28.409 [2024-07-25 12:13:15.513048] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.513191] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.513210] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.513217] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.513223] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.513240] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.523088] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.523233] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.523251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.523258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.523268] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.523285] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.533102] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.533247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.533265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.533272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.533278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.533295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 
00:27:28.409 [2024-07-25 12:13:15.543128] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.543269] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.543288] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.543295] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.543301] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.543318] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.553207] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.553369] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.553387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.553394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.553400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.553416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.563202] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.563347] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.563365] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.563373] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.563378] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.563395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 
00:27:28.409 [2024-07-25 12:13:15.573219] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.573366] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.573384] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.573391] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.573397] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.573414] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.583243] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.583388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.409 [2024-07-25 12:13:15.583406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.409 [2024-07-25 12:13:15.583413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.409 [2024-07-25 12:13:15.583419] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.409 [2024-07-25 12:13:15.583436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.409 qpair failed and we were unable to recover it. 00:27:28.409 [2024-07-25 12:13:15.593271] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.409 [2024-07-25 12:13:15.593424] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.593442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.593449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.593455] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.593471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 
00:27:28.410 [2024-07-25 12:13:15.603281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.603426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.603444] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.603451] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.603457] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.603474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 00:27:28.410 [2024-07-25 12:13:15.613328] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.613471] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.613490] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.613496] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.613507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.613523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 00:27:28.410 [2024-07-25 12:13:15.623356] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.623501] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.623520] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.623527] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.623533] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.623549] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 
00:27:28.410 [2024-07-25 12:13:15.633390] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.633536] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.633555] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.633562] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.633568] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.633584] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 00:27:28.410 [2024-07-25 12:13:15.643412] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.643556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.643575] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.643582] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.643587] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.643604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 00:27:28.410 [2024-07-25 12:13:15.653336] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.410 [2024-07-25 12:13:15.653480] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.410 [2024-07-25 12:13:15.653499] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.410 [2024-07-25 12:13:15.653506] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.410 [2024-07-25 12:13:15.653512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.410 [2024-07-25 12:13:15.653528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.410 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-07-25 12:13:15.663454] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.663600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.663620] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.663627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.663632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.663649] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-07-25 12:13:15.673480] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.673619] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.673637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.673644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.673650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.673666] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-07-25 12:13:15.683524] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.683664] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.683682] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.683689] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.683695] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.683711] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.671 [2024-07-25 12:13:15.693553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.693697] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.693716] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.693723] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.693729] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.693745] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-07-25 12:13:15.703572] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.703712] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.703731] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.703738] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.703747] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.703763] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 00:27:28.671 [2024-07-25 12:13:15.713600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.671 [2024-07-25 12:13:15.713742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.671 [2024-07-25 12:13:15.713761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.671 [2024-07-25 12:13:15.713768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.671 [2024-07-25 12:13:15.713774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.671 [2024-07-25 12:13:15.713790] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.671 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-07-25 12:13:15.723640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.723782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.723801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.723808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.723814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.723830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.733605] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.733784] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.733801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.733808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.733814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.733830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.743621] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.743765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.743783] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.743791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.743796] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.743813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-07-25 12:13:15.753721] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.753861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.753880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.753887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.753893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.753909] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.763759] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.763902] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.763920] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.763927] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.763933] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.763949] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.773774] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.773917] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.773936] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.773943] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.773948] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.773965] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-07-25 12:13:15.783803] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.783941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.783960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.783966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.783972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.783989] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.793833] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.793979] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.793997] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.794007] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.794013] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.794030] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.803864] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.804008] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.804027] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.804034] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.804040] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.804063] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-07-25 12:13:15.813903] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.814071] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.814089] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.814096] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.814102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.814119] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.823918] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.824067] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.824085] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.824092] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.824098] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.824115] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.833922] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.834074] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.834093] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.834100] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.834106] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.834123] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 
00:27:28.672 [2024-07-25 12:13:15.843985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.672 [2024-07-25 12:13:15.844139] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.672 [2024-07-25 12:13:15.844157] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.672 [2024-07-25 12:13:15.844164] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.672 [2024-07-25 12:13:15.844170] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.672 [2024-07-25 12:13:15.844186] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.672 qpair failed and we were unable to recover it. 00:27:28.672 [2024-07-25 12:13:15.854000] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.854153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.854169] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.854176] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.854182] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.854198] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 00:27:28.673 [2024-07-25 12:13:15.863949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.864110] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.864129] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.864136] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.864142] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.864159] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 
00:27:28.673 [2024-07-25 12:13:15.874056] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.874198] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.874217] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.874224] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.874230] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.874246] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 00:27:28.673 [2024-07-25 12:13:15.884101] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.884244] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.884262] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.884273] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.884279] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.884295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 00:27:28.673 [2024-07-25 12:13:15.894112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.894256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.894274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.894281] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.894287] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.894304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 
00:27:28.673 [2024-07-25 12:13:15.904162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.904302] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.904321] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.904328] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.904334] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.904351] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 00:27:28.673 [2024-07-25 12:13:15.914186] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.673 [2024-07-25 12:13:15.914330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.673 [2024-07-25 12:13:15.914349] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.673 [2024-07-25 12:13:15.914356] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.673 [2024-07-25 12:13:15.914362] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.673 [2024-07-25 12:13:15.914379] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.673 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-25 12:13:15.924205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.934 [2024-07-25 12:13:15.924374] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.934 [2024-07-25 12:13:15.924392] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.934 [2024-07-25 12:13:15.924400] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.934 [2024-07-25 12:13:15.924407] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.934 [2024-07-25 12:13:15.924423] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-25 12:13:15.934205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.934 [2024-07-25 12:13:15.934351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.934 [2024-07-25 12:13:15.934370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.934 [2024-07-25 12:13:15.934377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.934 [2024-07-25 12:13:15.934385] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.934 [2024-07-25 12:13:15.934402] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-25 12:13:15.944234] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.934 [2024-07-25 12:13:15.944380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.934 [2024-07-25 12:13:15.944398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.934 [2024-07-25 12:13:15.944405] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.934 [2024-07-25 12:13:15.944411] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.934 [2024-07-25 12:13:15.944427] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.934 qpair failed and we were unable to recover it. 00:27:28.934 [2024-07-25 12:13:15.954282] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.934 [2024-07-25 12:13:15.954426] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.934 [2024-07-25 12:13:15.954445] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.934 [2024-07-25 12:13:15.954452] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.934 [2024-07-25 12:13:15.954458] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.934 [2024-07-25 12:13:15.954474] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.934 qpair failed and we were unable to recover it. 
00:27:28.934 [2024-07-25 12:13:15.964309] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:15.964453] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:15.964472] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:15.964479] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:15.964485] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:15.964502] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:15.974318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:15.974470] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:15.974488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:15.974499] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:15.974505] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:15.974521] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:15.984396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:15.984746] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:15.984764] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:15.984771] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:15.984777] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:15.984793] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 
00:27:28.935 [2024-07-25 12:13:15.994407] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:15.994564] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:15.994583] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:15.994590] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:15.994596] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:15.994612] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.004529] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.004680] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.004698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.004705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.004711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.004727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.014505] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.014654] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.014673] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.014680] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.014686] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.014702] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 
00:27:28.935 [2024-07-25 12:13:16.024544] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.024689] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.024708] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.024715] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.024721] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.024738] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.034569] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.034707] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.034726] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.034734] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.034740] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.034756] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.044549] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.044699] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.044717] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.044724] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.044730] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.044747] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 
00:27:28.935 [2024-07-25 12:13:16.054586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.054733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.054752] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.054759] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.054767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.054784] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.064553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.064698] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.064720] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.064727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.064733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.064750] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.074640] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.074786] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.074805] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.074812] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.074818] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.074835] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 
00:27:28.935 [2024-07-25 12:13:16.084650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.084797] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.935 [2024-07-25 12:13:16.084815] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.935 [2024-07-25 12:13:16.084822] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.935 [2024-07-25 12:13:16.084829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.935 [2024-07-25 12:13:16.084845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.935 qpair failed and we were unable to recover it. 00:27:28.935 [2024-07-25 12:13:16.094631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.935 [2024-07-25 12:13:16.094777] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.094796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.094803] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.094809] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.094826] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:28.936 [2024-07-25 12:13:16.104667] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.104814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.104833] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.104839] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.104845] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.104866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 
00:27:28.936 [2024-07-25 12:13:16.114746] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.114891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.114910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.114917] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.114922] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.114938] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:28.936 [2024-07-25 12:13:16.124798] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.124946] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.124964] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.124971] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.124977] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.124993] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:28.936 [2024-07-25 12:13:16.134741] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.134898] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.134917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.134923] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.134929] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.134946] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 
00:27:28.936 [2024-07-25 12:13:16.144830] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.144975] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.144994] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.145000] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.145006] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.145022] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:28.936 [2024-07-25 12:13:16.154880] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.155024] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.155056] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.155064] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.155070] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.155088] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:28.936 [2024-07-25 12:13:16.164915] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.165073] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.165092] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.165099] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.165105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.165122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 
00:27:28.936 [2024-07-25 12:13:16.174926] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:28.936 [2024-07-25 12:13:16.175084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:28.936 [2024-07-25 12:13:16.175103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:28.936 [2024-07-25 12:13:16.175110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:28.936 [2024-07-25 12:13:16.175116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:28.936 [2024-07-25 12:13:16.175133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:28.936 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.184969] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.185124] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.185143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.185151] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.185157] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.185174] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.195007] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.195163] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.195182] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.195189] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.195195] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.195219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 
00:27:29.198 [2024-07-25 12:13:16.205033] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.205187] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.205206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.205213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.205218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.205235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.215046] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.215185] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.215204] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.215211] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.215218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.215235] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.225095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.225242] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.225260] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.225267] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.225273] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.225290] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 
00:27:29.198 [2024-07-25 12:13:16.235098] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.235247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.235265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.235271] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.235277] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.235294] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.245152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.245295] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.245317] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.245324] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.245330] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.245347] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.255174] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.255320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.255339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.255346] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.255352] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.255369] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 
00:27:29.198 [2024-07-25 12:13:16.265211] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.265368] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.265387] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.265394] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.265400] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.265416] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.275179] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.275322] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.275340] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.275347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.275353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.275370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.198 qpair failed and we were unable to recover it. 00:27:29.198 [2024-07-25 12:13:16.285277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.198 [2024-07-25 12:13:16.285427] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.198 [2024-07-25 12:13:16.285446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.198 [2024-07-25 12:13:16.285453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.198 [2024-07-25 12:13:16.285459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.198 [2024-07-25 12:13:16.285478] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 
00:27:29.199 [2024-07-25 12:13:16.295266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.295408] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.295427] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.295434] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.295440] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.295456] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.305330] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.305475] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.305494] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.305501] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.305507] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.305523] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.315340] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.315484] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.315504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.315510] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.315516] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.315533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 
00:27:29.199 [2024-07-25 12:13:16.325360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.325537] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.325556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.325563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.325569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.325585] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.335339] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.335483] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.335506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.335513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.335518] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.335535] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.345457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.345611] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.345631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.345637] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.345643] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.345660] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 
00:27:29.199 [2024-07-25 12:13:16.355401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.355547] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.355566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.355573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.355579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.355595] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.365431] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.365588] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.365606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.365613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.365619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.365635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.375521] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.375668] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.375687] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.375693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.375703] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.375719] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 
00:27:29.199 [2024-07-25 12:13:16.385557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.385701] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.385719] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.385727] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.385733] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.385749] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.395582] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.395721] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.395740] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.395747] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.395753] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.395769] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 00:27:29.199 [2024-07-25 12:13:16.405622] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.405765] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.405784] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.405791] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.199 [2024-07-25 12:13:16.405797] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.199 [2024-07-25 12:13:16.405813] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.199 qpair failed and we were unable to recover it. 
00:27:29.199 [2024-07-25 12:13:16.415626] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.199 [2024-07-25 12:13:16.415769] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.199 [2024-07-25 12:13:16.415788] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.199 [2024-07-25 12:13:16.415795] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.200 [2024-07-25 12:13:16.415801] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.200 [2024-07-25 12:13:16.415817] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.200 qpair failed and we were unable to recover it. 00:27:29.200 [2024-07-25 12:13:16.425671] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.200 [2024-07-25 12:13:16.425820] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.200 [2024-07-25 12:13:16.425839] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.200 [2024-07-25 12:13:16.425846] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.200 [2024-07-25 12:13:16.425852] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.200 [2024-07-25 12:13:16.425868] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.200 qpair failed and we were unable to recover it. 00:27:29.200 [2024-07-25 12:13:16.435631] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.200 [2024-07-25 12:13:16.435776] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.200 [2024-07-25 12:13:16.435795] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.200 [2024-07-25 12:13:16.435802] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.200 [2024-07-25 12:13:16.435807] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.200 [2024-07-25 12:13:16.435825] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.200 qpair failed and we were unable to recover it. 
00:27:29.200 [2024-07-25 12:13:16.445706] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.200 [2024-07-25 12:13:16.445887] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.200 [2024-07-25 12:13:16.445906] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.200 [2024-07-25 12:13:16.445913] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.200 [2024-07-25 12:13:16.445919] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.200 [2024-07-25 12:13:16.445935] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.200 qpair failed and we were unable to recover it. 00:27:29.460 [2024-07-25 12:13:16.455732] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.460 [2024-07-25 12:13:16.455879] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.455898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.455905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.455912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.455928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.465781] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.465933] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.465952] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.465959] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.465970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.465986] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 
00:27:29.461 [2024-07-25 12:13:16.475820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.475965] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.475983] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.475990] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.475996] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.476012] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.485858] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.486006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.486024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.486031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.486037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.486059] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.495872] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.496021] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.496039] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.496054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.496060] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.496077] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 
00:27:29.461 [2024-07-25 12:13:16.505832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.505972] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.505990] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.505997] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.506003] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.506020] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.515932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.516086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.516105] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.516111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.516117] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.516133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.525975] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.526125] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.526143] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.526150] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.526156] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.526172] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 
00:27:29.461 [2024-07-25 12:13:16.535991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.536145] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.536163] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.536170] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.536176] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.536192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.546023] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.546171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.546189] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.546196] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.546202] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.546218] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 00:27:29.461 [2024-07-25 12:13:16.555985] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.556172] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.556191] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.556202] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.556208] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.556224] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.461 qpair failed and we were unable to recover it. 
00:27:29.461 [2024-07-25 12:13:16.566081] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.461 [2024-07-25 12:13:16.566232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.461 [2024-07-25 12:13:16.566251] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.461 [2024-07-25 12:13:16.566258] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.461 [2024-07-25 12:13:16.566264] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.461 [2024-07-25 12:13:16.566281] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.576100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.576247] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.576265] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.576272] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.576278] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.576295] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.586108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.586256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.586275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.586282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.586288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.586304] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 
00:27:29.462 [2024-07-25 12:13:16.596380] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.596516] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.596534] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.596541] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.596547] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.596564] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.606185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.606329] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.606348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.606355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.606361] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.606377] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.616226] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.616371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.616390] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.616397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.616403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.616419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 
00:27:29.462 [2024-07-25 12:13:16.626181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.626327] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.626346] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.626353] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.626359] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.626375] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.636277] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.636423] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.636442] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.636449] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.636454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.636471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.646323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.646463] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.646481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.646492] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.646498] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.646514] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 
00:27:29.462 [2024-07-25 12:13:16.656342] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.656492] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.656510] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.656517] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.656523] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.656539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.666389] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.666551] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.666569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.666576] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.666582] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.666598] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.676318] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.676458] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.676476] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.676482] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.676488] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.676504] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 
00:27:29.462 [2024-07-25 12:13:16.686429] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.686570] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.686588] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.686595] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.686601] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.686617] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.462 qpair failed and we were unable to recover it. 00:27:29.462 [2024-07-25 12:13:16.696371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.462 [2024-07-25 12:13:16.696573] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.462 [2024-07-25 12:13:16.696591] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.462 [2024-07-25 12:13:16.696598] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.462 [2024-07-25 12:13:16.696604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.462 [2024-07-25 12:13:16.696621] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.463 qpair failed and we were unable to recover it. 00:27:29.463 [2024-07-25 12:13:16.706456] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.463 [2024-07-25 12:13:16.706594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.463 [2024-07-25 12:13:16.706612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.463 [2024-07-25 12:13:16.706619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.463 [2024-07-25 12:13:16.706625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.463 [2024-07-25 12:13:16.706641] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.463 qpair failed and we were unable to recover it. 
00:27:29.724 [2024-07-25 12:13:16.716511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.716656] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.716674] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.716682] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.716688] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.716704] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.726542] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.726688] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.726707] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.726714] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.726720] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.726737] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.736531] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.736676] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.736695] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.736705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.736711] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.736727] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 
00:27:29.724 [2024-07-25 12:13:16.746590] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.746735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.746753] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.746760] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.746767] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.746783] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.756620] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.756762] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.756780] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.756787] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.756793] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.756810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.766652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.766798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.766818] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.766825] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.766831] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.766847] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 
00:27:29.724 [2024-07-25 12:13:16.776684] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.776829] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.776848] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.776854] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.776860] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.776877] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.786680] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.786835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.786853] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.786860] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.786866] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.786883] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.796768] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.796927] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.796945] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.796952] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.796958] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.796974] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 
00:27:29.724 [2024-07-25 12:13:16.806773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.806913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.806932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.806939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.806945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.806961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.816793] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.816940] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.816959] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.816966] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.816972] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.816988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 00:27:29.724 [2024-07-25 12:13:16.826751] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.826896] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.826917] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.724 [2024-07-25 12:13:16.826924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.724 [2024-07-25 12:13:16.826930] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.724 [2024-07-25 12:13:16.826947] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.724 qpair failed and we were unable to recover it. 
00:27:29.724 [2024-07-25 12:13:16.836852] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.724 [2024-07-25 12:13:16.836997] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.724 [2024-07-25 12:13:16.837015] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.837022] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.837029] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.837052] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.846889] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.847035] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.847060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.847067] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.847073] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.847090] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.856905] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.857060] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.857078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.857084] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.857090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.857106] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 
00:27:29.725 [2024-07-25 12:13:16.866932] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.867084] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.867103] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.867110] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.867116] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.867133] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.876973] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.877123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.877141] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.877148] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.877154] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.877171] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.887001] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.887153] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.887172] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.887179] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.887185] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.887202] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 
00:27:29.725 [2024-07-25 12:13:16.897268] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.897462] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.897481] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.897488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.897494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.897510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.907092] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.907239] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.907257] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.907264] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.907270] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.907287] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.917093] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.917232] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.917254] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.917261] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.917267] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.917283] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 
00:27:29.725 [2024-07-25 12:13:16.927035] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.927186] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.927205] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.927212] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.927218] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.927234] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.937160] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.937316] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.937335] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.937342] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.937347] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.937364] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 00:27:29.725 [2024-07-25 12:13:16.947188] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.725 [2024-07-25 12:13:16.947333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.725 [2024-07-25 12:13:16.947352] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.725 [2024-07-25 12:13:16.947359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.725 [2024-07-25 12:13:16.947365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.725 [2024-07-25 12:13:16.947381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.725 qpair failed and we were unable to recover it. 
00:27:29.726 [2024-07-25 12:13:16.957195] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.726 [2024-07-25 12:13:16.957341] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.726 [2024-07-25 12:13:16.957360] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.726 [2024-07-25 12:13:16.957366] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.726 [2024-07-25 12:13:16.957372] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.726 [2024-07-25 12:13:16.957395] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.726 qpair failed and we were unable to recover it. 00:27:29.726 [2024-07-25 12:13:16.967162] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.726 [2024-07-25 12:13:16.967304] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.726 [2024-07-25 12:13:16.967323] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.726 [2024-07-25 12:13:16.967330] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.726 [2024-07-25 12:13:16.967336] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.726 [2024-07-25 12:13:16.967352] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.726 qpair failed and we were unable to recover it. 00:27:29.986 [2024-07-25 12:13:16.977194] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.986 [2024-07-25 12:13:16.977351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.986 [2024-07-25 12:13:16.977370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.986 [2024-07-25 12:13:16.977377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.986 [2024-07-25 12:13:16.977384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.986 [2024-07-25 12:13:16.977400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.986 qpair failed and we were unable to recover it. 
00:27:29.986 [2024-07-25 12:13:16.987270] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.986 [2024-07-25 12:13:16.987414] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.986 [2024-07-25 12:13:16.987433] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.986 [2024-07-25 12:13:16.987439] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.986 [2024-07-25 12:13:16.987446] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.986 [2024-07-25 12:13:16.987462] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.986 qpair failed and we were unable to recover it. 00:27:29.986 [2024-07-25 12:13:16.997371] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.986 [2024-07-25 12:13:16.997519] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.986 [2024-07-25 12:13:16.997537] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.986 [2024-07-25 12:13:16.997544] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.986 [2024-07-25 12:13:16.997550] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.986 [2024-07-25 12:13:16.997567] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.986 qpair failed and we were unable to recover it. 00:27:29.986 [2024-07-25 12:13:17.007379] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.986 [2024-07-25 12:13:17.007534] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.986 [2024-07-25 12:13:17.007556] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.986 [2024-07-25 12:13:17.007563] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.986 [2024-07-25 12:13:17.007569] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.007586] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 
00:27:29.987 [2024-07-25 12:13:17.017323] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.017469] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.017488] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.017495] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.017501] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.017517] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.027423] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.027568] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.027586] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.027593] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.027599] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.027615] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.037449] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.037587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.037606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.037613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.037619] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.037635] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 
00:27:29.987 [2024-07-25 12:13:17.047496] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.047647] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.047666] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.047673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.047678] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.047698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.057511] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.057660] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.057678] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.057685] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.057691] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.057707] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.067547] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.067696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.067714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.067722] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.067728] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.067744] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 
00:27:29.987 [2024-07-25 12:13:17.077588] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.077732] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.077750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.077757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.077763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.077779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.087625] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.087806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.087824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.087832] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.087838] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.087855] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.097673] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.097835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.097857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.097864] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.097869] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.097886] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 
00:27:29.987 [2024-07-25 12:13:17.107633] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.107774] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.107793] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.107800] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.107806] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.107822] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.117660] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.117803] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.117821] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.117828] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.117835] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.117851] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.127652] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.127806] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.127824] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.127831] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.127837] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.127853] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 
00:27:29.987 [2024-07-25 12:13:17.137758] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.987 [2024-07-25 12:13:17.137905] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.987 [2024-07-25 12:13:17.137924] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.987 [2024-07-25 12:13:17.137931] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.987 [2024-07-25 12:13:17.137940] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.987 [2024-07-25 12:13:17.137956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.987 qpair failed and we were unable to recover it. 00:27:29.987 [2024-07-25 12:13:17.147776] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.147921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.147939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.147946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.147952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.147969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 12:13:17.157799] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.157941] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.157960] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.157967] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.157973] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.157990] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 
00:27:29.988 [2024-07-25 12:13:17.167766] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.167915] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.167934] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.167941] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.167947] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.167964] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 12:13:17.177867] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.178011] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.178030] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.178037] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.178049] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.178066] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 12:13:17.187847] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.188036] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.188062] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.188069] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.188075] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.188092] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 
00:27:29.988 [2024-07-25 12:13:17.197908] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.198059] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.198078] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.198085] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.198090] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.198108] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 12:13:17.207941] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.208091] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.208116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.208130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.208136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.208152] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:29.988 [2024-07-25 12:13:17.217972] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.218128] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.218147] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.218154] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.218162] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.218179] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 
00:27:29.988 [2024-07-25 12:13:17.227920] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:29.988 [2024-07-25 12:13:17.228068] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:29.988 [2024-07-25 12:13:17.228088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:29.988 [2024-07-25 12:13:17.228095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:29.988 [2024-07-25 12:13:17.228105] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:29.988 [2024-07-25 12:13:17.228122] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:29.988 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.238058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.238217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.238236] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.238243] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.238250] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.238266] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.248060] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.248208] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.248227] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.248234] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.248241] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.248258] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 
00:27:30.249 [2024-07-25 12:13:17.258100] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.258245] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.258263] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.258270] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.258276] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.258293] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.268111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.268255] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.268273] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.268280] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.268286] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.268303] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.278141] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.278284] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.278302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.278309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.278315] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.278332] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 
00:27:30.249 [2024-07-25 12:13:17.288172] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.288317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.288336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.288343] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.288350] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.288366] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.298205] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.298351] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.298370] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.298377] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.298384] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.298400] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.308224] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.308382] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.308401] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.308408] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.308415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.308431] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 
00:27:30.249 [2024-07-25 12:13:17.318181] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.318330] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.318348] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.318355] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.318365] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.318381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.328279] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.328438] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.328456] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.328463] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.328469] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.328487] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 00:27:30.249 [2024-07-25 12:13:17.338307] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.338452] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.338471] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.338478] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.338484] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.249 [2024-07-25 12:13:17.338500] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.249 qpair failed and we were unable to recover it. 
00:27:30.249 [2024-07-25 12:13:17.348343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.249 [2024-07-25 12:13:17.348482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.249 [2024-07-25 12:13:17.348500] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.249 [2024-07-25 12:13:17.348508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.249 [2024-07-25 12:13:17.348514] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.348531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.358362] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.358503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.358522] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.358529] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.358535] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.358551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.368394] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.368540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.368558] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.368565] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.368571] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.368587] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 
00:27:30.250 [2024-07-25 12:13:17.378424] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.378571] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.378589] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.378597] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.378604] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.378620] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.388444] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.388587] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.388606] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.388613] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.388620] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.388636] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.398479] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.398621] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.398640] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.398646] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.398652] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.398669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 
00:27:30.250 [2024-07-25 12:13:17.408498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.408643] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.408661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.408673] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.408679] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.408696] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.418522] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.418667] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.418686] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.418693] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.418699] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.418715] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.428557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.428703] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.428722] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.428729] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.428735] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.428751] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 
00:27:30.250 [2024-07-25 12:13:17.438603] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.438743] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.438762] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.438769] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.438776] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.438792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.448611] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.448759] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.448777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.448784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.448791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.448807] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.458636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.458782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.458801] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.458808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.458815] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.458831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 
00:27:30.250 [2024-07-25 12:13:17.468653] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.468798] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.468816] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.468823] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.468829] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.468845] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.250 [2024-07-25 12:13:17.478616] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.250 [2024-07-25 12:13:17.478757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.250 [2024-07-25 12:13:17.478775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.250 [2024-07-25 12:13:17.478782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.250 [2024-07-25 12:13:17.478789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.250 [2024-07-25 12:13:17.478805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.250 qpair failed and we were unable to recover it. 00:27:30.251 [2024-07-25 12:13:17.488714] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.251 [2024-07-25 12:13:17.488861] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.251 [2024-07-25 12:13:17.488880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.251 [2024-07-25 12:13:17.488887] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.251 [2024-07-25 12:13:17.488893] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.251 [2024-07-25 12:13:17.488910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.251 qpair failed and we were unable to recover it. 
00:27:30.510 [2024-07-25 12:13:17.498745] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.510 [2024-07-25 12:13:17.498894] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.510 [2024-07-25 12:13:17.498913] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.510 [2024-07-25 12:13:17.498924] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.510 [2024-07-25 12:13:17.498931] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.510 [2024-07-25 12:13:17.498948] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.510 qpair failed and we were unable to recover it. 00:27:30.510 [2024-07-25 12:13:17.508713] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.510 [2024-07-25 12:13:17.508862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.510 [2024-07-25 12:13:17.508881] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.510 [2024-07-25 12:13:17.508888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.510 [2024-07-25 12:13:17.508894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.510 [2024-07-25 12:13:17.508910] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.510 qpair failed and we were unable to recover it. 00:27:30.510 [2024-07-25 12:13:17.518767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.510 [2024-07-25 12:13:17.518922] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.510 [2024-07-25 12:13:17.518940] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.510 [2024-07-25 12:13:17.518946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.510 [2024-07-25 12:13:17.518953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.510 [2024-07-25 12:13:17.518969] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.510 qpair failed and we were unable to recover it. 
00:27:30.510 [2024-07-25 12:13:17.528862] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.529006] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.529024] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.529031] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.529037] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.529061] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.538767] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.538911] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.538929] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.538936] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.538943] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.538959] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.548812] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.548955] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.548973] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.548980] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.548987] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.549003] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 
00:27:30.511 [2024-07-25 12:13:17.558826] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.558968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.558986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.558993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.558999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.559016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.568923] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.569075] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.569094] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.569101] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.569108] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.569124] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.578935] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.579086] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.579104] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.579111] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.579118] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.579135] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 
00:27:30.511 [2024-07-25 12:13:17.588993] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.589137] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.589159] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.589166] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.589173] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.589189] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.598977] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.599123] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.599142] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.599149] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.599155] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.599173] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.609065] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.609212] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.609230] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.609238] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.609244] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.609262] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 
00:27:30.511 [2024-07-25 12:13:17.619068] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.619217] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.619235] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.619242] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.619248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.619265] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.629118] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.629267] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.629285] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.629292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.629299] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.629316] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.639079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.639226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.639244] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.639251] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.639258] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.639275] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 
00:27:30.511 [2024-07-25 12:13:17.649209] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.649359] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.511 [2024-07-25 12:13:17.649377] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.511 [2024-07-25 12:13:17.649384] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.511 [2024-07-25 12:13:17.649390] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.511 [2024-07-25 12:13:17.649406] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.511 qpair failed and we were unable to recover it. 00:27:30.511 [2024-07-25 12:13:17.659206] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.511 [2024-07-25 12:13:17.659354] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.659373] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.659379] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.659386] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.659403] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.669216] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.669361] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.669379] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.669386] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.669393] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.669410] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 
00:27:30.512 [2024-07-25 12:13:17.679222] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.679367] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.679389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.679397] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.679403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.679420] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.689325] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.689510] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.689528] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.689535] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.689542] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.689558] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.699346] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.699490] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.699508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.699515] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.699521] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.699538] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 
00:27:30.512 [2024-07-25 12:13:17.709266] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.709413] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.709431] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.709438] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.709445] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.709461] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.719302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.719444] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.719462] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.719470] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.719476] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.719499] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.729331] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.729479] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.729498] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.729505] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.729512] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.729528] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 
00:27:30.512 [2024-07-25 12:13:17.739439] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.739599] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.739617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.739624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.739631] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.739647] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.512 [2024-07-25 12:13:17.749385] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.512 [2024-07-25 12:13:17.749525] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.512 [2024-07-25 12:13:17.749544] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.512 [2024-07-25 12:13:17.749551] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.512 [2024-07-25 12:13:17.749557] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.512 [2024-07-25 12:13:17.749573] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.512 qpair failed and we were unable to recover it. 00:27:30.772 [2024-07-25 12:13:17.759473] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.772 [2024-07-25 12:13:17.759616] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.772 [2024-07-25 12:13:17.759635] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.772 [2024-07-25 12:13:17.759642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.772 [2024-07-25 12:13:17.759649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.772 [2024-07-25 12:13:17.759665] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.772 qpair failed and we were unable to recover it. 
00:27:30.772 [2024-07-25 12:13:17.769452] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.772 [2024-07-25 12:13:17.769596] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.772 [2024-07-25 12:13:17.769619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.772 [2024-07-25 12:13:17.769626] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.772 [2024-07-25 12:13:17.769632] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.772 [2024-07-25 12:13:17.769648] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.772 qpair failed and we were unable to recover it. 00:27:30.772 [2024-07-25 12:13:17.779477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.772 [2024-07-25 12:13:17.779618] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.772 [2024-07-25 12:13:17.779637] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.772 [2024-07-25 12:13:17.779644] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.772 [2024-07-25 12:13:17.779650] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.772 [2024-07-25 12:13:17.779667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.772 qpair failed and we were unable to recover it. 00:27:30.772 [2024-07-25 12:13:17.789580] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.772 [2024-07-25 12:13:17.789726] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.789744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.789751] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.789757] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.789774] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 
00:27:30.773 [2024-07-25 12:13:17.799528] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.799671] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.799689] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.799696] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.799702] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.799718] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.809636] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.809778] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.809796] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.809804] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.809810] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.809830] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.819639] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.819782] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.819800] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.819808] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.819814] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.819831] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 
00:27:30.773 [2024-07-25 12:13:17.829686] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.829849] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.829867] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.829874] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.829880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.829898] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.839708] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.839870] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.839890] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.839897] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.839904] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.839921] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.849743] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.849891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.849909] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.849916] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.849923] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.849940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 
00:27:30.773 [2024-07-25 12:13:17.859755] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.859910] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.859932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.859939] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.859945] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.859962] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.869775] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.869921] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.869939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.869947] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.869953] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.869970] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.879832] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.879984] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.880002] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.880009] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.880016] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.880032] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 
00:27:30.773 [2024-07-25 12:13:17.889790] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.889978] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.889996] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.890003] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.890009] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.890026] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.899890] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.900034] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.900060] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.900068] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.900078] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.900095] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 00:27:30.773 [2024-07-25 12:13:17.909913] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.910063] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.773 [2024-07-25 12:13:17.910082] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.773 [2024-07-25 12:13:17.910088] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.773 [2024-07-25 12:13:17.910095] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.773 [2024-07-25 12:13:17.910112] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.773 qpair failed and we were unable to recover it. 
00:27:30.773 [2024-07-25 12:13:17.919986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.773 [2024-07-25 12:13:17.920161] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.920180] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.920187] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.920194] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.920210] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:17.929966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.930120] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.930138] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.930145] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.930152] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.930168] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:17.940003] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.940156] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.940174] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.940181] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.940188] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.940205] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 
00:27:30.774 [2024-07-25 12:13:17.950027] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.950183] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.950201] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.950208] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.950214] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.950231] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:17.960072] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.960216] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.960234] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.960241] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.960248] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.960264] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:17.970080] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.970226] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.970245] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.970253] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.970261] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.970278] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 
00:27:30.774 [2024-07-25 12:13:17.980112] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.980256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.980275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.980283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.980291] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.980308] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:17.990070] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:17.990257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:17.990276] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:17.990283] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:17.990293] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:17.990310] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:18.000165] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:18.000305] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:18.000324] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:18.000331] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:18.000338] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:18.000354] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 
00:27:30.774 [2024-07-25 12:13:18.010218] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:18.010401] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:18.010420] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:18.010427] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:18.010433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:18.010450] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:30.774 [2024-07-25 12:13:18.020250] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:30.774 [2024-07-25 12:13:18.020422] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:30.774 [2024-07-25 12:13:18.020441] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:30.774 [2024-07-25 12:13:18.020448] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:30.774 [2024-07-25 12:13:18.020454] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:30.774 [2024-07-25 12:13:18.020471] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:30.774 qpair failed and we were unable to recover it. 00:27:31.035 [2024-07-25 12:13:18.030253] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.030399] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.030418] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.030425] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.030433] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.030449] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 
00:27:31.035 [2024-07-25 12:13:18.040310] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.040464] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.040483] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.040490] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.040496] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.040512] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 00:27:31.035 [2024-07-25 12:13:18.050314] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.050461] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.050480] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.050488] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.050494] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.050510] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 00:27:31.035 [2024-07-25 12:13:18.060334] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.060482] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.060501] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.060508] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.060515] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.060531] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 
00:27:31.035 [2024-07-25 12:13:18.070364] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.070505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.070523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.070530] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.070536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.070553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 00:27:31.035 [2024-07-25 12:13:18.080403] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.080544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.080563] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.080570] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.080580] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.080596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 00:27:31.035 [2024-07-25 12:13:18.090360] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.090505] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.035 [2024-07-25 12:13:18.090523] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.035 [2024-07-25 12:13:18.090531] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.035 [2024-07-25 12:13:18.090536] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.035 [2024-07-25 12:13:18.090553] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.035 qpair failed and we were unable to recover it. 
00:27:31.035 [2024-07-25 12:13:18.100446] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.035 [2024-07-25 12:13:18.100594] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.100612] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.100619] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.100625] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.100642] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.110467] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.110612] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.110631] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.110638] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.110644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.110662] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.120498] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.120642] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.120661] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.120668] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.120674] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.120691] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 
00:27:31.036 [2024-07-25 12:13:18.130546] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.130693] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.130711] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.130718] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.130724] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.130741] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.140553] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.140716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.140735] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.140742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.140748] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.140765] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.150576] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.150835] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.150896] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.150905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.150912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.150930] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 
00:27:31.036 [2024-07-25 12:13:18.160608] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.160758] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.160777] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.160784] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.160791] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.160808] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.170581] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.170725] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.170744] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.170755] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.170762] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.170781] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.180597] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.180742] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.180761] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.180768] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.180774] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.180792] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 
00:27:31.036 [2024-07-25 12:13:18.190694] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.190839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.190857] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.190865] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.190872] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.190889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.200716] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.200862] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.200880] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.200888] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.200894] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.200911] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.210748] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.210891] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.210910] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.210918] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.210924] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.210940] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 
00:27:31.036 [2024-07-25 12:13:18.220807] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.220949] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.220969] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.220976] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.220983] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.036 [2024-07-25 12:13:18.220999] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.036 qpair failed and we were unable to recover it. 00:27:31.036 [2024-07-25 12:13:18.230820] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.036 [2024-07-25 12:13:18.230967] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.036 [2024-07-25 12:13:18.230986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.036 [2024-07-25 12:13:18.230993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.036 [2024-07-25 12:13:18.231000] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.231016] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 00:27:31.037 [2024-07-25 12:13:18.240831] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.037 [2024-07-25 12:13:18.240968] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.037 [2024-07-25 12:13:18.240986] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.037 [2024-07-25 12:13:18.240993] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.037 [2024-07-25 12:13:18.240999] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.241015] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 
00:27:31.037 [2024-07-25 12:13:18.250875] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.037 [2024-07-25 12:13:18.251023] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.037 [2024-07-25 12:13:18.251041] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.037 [2024-07-25 12:13:18.251054] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.037 [2024-07-25 12:13:18.251061] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.251079] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 00:27:31.037 [2024-07-25 12:13:18.260814] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.037 [2024-07-25 12:13:18.260963] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.037 [2024-07-25 12:13:18.260981] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.037 [2024-07-25 12:13:18.260992] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.037 [2024-07-25 12:13:18.260998] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.261014] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 00:27:31.037 [2024-07-25 12:13:18.271106] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.037 [2024-07-25 12:13:18.271249] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.037 [2024-07-25 12:13:18.271267] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.037 [2024-07-25 12:13:18.271275] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.037 [2024-07-25 12:13:18.271281] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.271298] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 
00:27:31.037 [2024-07-25 12:13:18.281045] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.037 [2024-07-25 12:13:18.281188] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.037 [2024-07-25 12:13:18.281206] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.037 [2024-07-25 12:13:18.281213] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.037 [2024-07-25 12:13:18.281220] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.037 [2024-07-25 12:13:18.281237] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.037 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.290992] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.291142] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.291161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.291168] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.291175] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.291192] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.301008] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.301158] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.301176] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.301184] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.301191] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.301207] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 
00:27:31.298 [2024-07-25 12:13:18.311025] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.311171] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.311190] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.311197] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.311203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.311220] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.321095] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.321256] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.321274] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.321282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.321288] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.321306] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.331111] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.331266] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.331284] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.331292] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.331298] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.331315] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 
00:27:31.298 [2024-07-25 12:13:18.341132] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.341276] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.341294] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.341301] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.341308] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.341324] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.351171] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.351317] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.351336] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.351347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.351354] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.351370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.361187] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.361333] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.361351] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.361359] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.361364] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.361381] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 
00:27:31.298 [2024-07-25 12:13:18.371276] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.371428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.371446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.371453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.371459] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.371476] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.381244] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.298 [2024-07-25 12:13:18.381391] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.298 [2024-07-25 12:13:18.381409] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.298 [2024-07-25 12:13:18.381416] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.298 [2024-07-25 12:13:18.381422] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.298 [2024-07-25 12:13:18.381440] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.298 qpair failed and we were unable to recover it. 00:27:31.298 [2024-07-25 12:13:18.391281] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.391441] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.391460] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.391467] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.391473] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.391489] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 
00:27:31.299 [2024-07-25 12:13:18.401530] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.401673] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.401692] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.401699] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.401706] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.401723] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.411343] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.411488] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.411506] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.411513] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.411520] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.411536] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.421376] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.421521] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.421540] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.421547] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.421554] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.421570] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 
00:27:31.299 [2024-07-25 12:13:18.431395] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.431540] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.431559] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.431566] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.431573] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.431590] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.441405] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.441544] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.441566] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.441573] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.441579] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.441596] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.451482] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.451630] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.451648] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.451655] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.451661] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.451678] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 
00:27:31.299 [2024-07-25 12:13:18.461477] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.461636] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.461655] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.461662] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.461668] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.461684] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.471508] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.471653] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.471671] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.471678] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.471684] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.471701] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.481557] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.481696] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.481714] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.481721] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.481727] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.481752] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 
00:27:31.299 [2024-07-25 12:13:18.491599] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.491760] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.491778] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.491786] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.491792] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.491809] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.501600] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.501757] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.501775] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.501782] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.501789] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.501805] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 00:27:31.299 [2024-07-25 12:13:18.511539] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.511755] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.511773] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.299 [2024-07-25 12:13:18.511780] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.299 [2024-07-25 12:13:18.511787] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.299 [2024-07-25 12:13:18.511803] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.299 qpair failed and we were unable to recover it. 
00:27:31.299 [2024-07-25 12:13:18.521676] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.299 [2024-07-25 12:13:18.521847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.299 [2024-07-25 12:13:18.521865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.300 [2024-07-25 12:13:18.521872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.300 [2024-07-25 12:13:18.521878] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.300 [2024-07-25 12:13:18.521895] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.300 qpair failed and we were unable to recover it. 00:27:31.300 [2024-07-25 12:13:18.531669] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.300 [2024-07-25 12:13:18.531814] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.300 [2024-07-25 12:13:18.531836] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.300 [2024-07-25 12:13:18.531843] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.300 [2024-07-25 12:13:18.531849] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.300 [2024-07-25 12:13:18.531866] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.300 qpair failed and we were unable to recover it. 00:27:31.300 [2024-07-25 12:13:18.541691] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.300 [2024-07-25 12:13:18.541839] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.300 [2024-07-25 12:13:18.541859] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.300 [2024-07-25 12:13:18.541866] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.300 [2024-07-25 12:13:18.541873] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.300 [2024-07-25 12:13:18.541889] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.300 qpair failed and we were unable to recover it. 
00:27:31.561 [2024-07-25 12:13:18.551720] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.551867] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.551886] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.551893] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.551899] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.551916] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.561734] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.561880] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.561898] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.561905] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.561912] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.561928] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.571778] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.571920] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.571939] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.571946] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.571952] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.571973] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 
00:27:31.561 [2024-07-25 12:13:18.581806] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.581948] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.581967] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.581974] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.581981] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.581997] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.591837] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.591980] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.591998] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.592005] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.592012] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.592028] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.601870] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.602016] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.602034] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.602046] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.602053] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.602070] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 
00:27:31.561 [2024-07-25 12:13:18.611938] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.612097] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.612116] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.612123] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.612129] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.612146] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.621900] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.622052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.622074] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.622081] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.622087] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.622105] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.631986] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.632157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.632175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.632182] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.632200] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.632217] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 
00:27:31.561 [2024-07-25 12:13:18.641991] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.642222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.642240] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.642248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.642254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.561 [2024-07-25 12:13:18.642270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.561 qpair failed and we were unable to recover it. 00:27:31.561 [2024-07-25 12:13:18.652020] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.561 [2024-07-25 12:13:18.652170] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.561 [2024-07-25 12:13:18.652188] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.561 [2024-07-25 12:13:18.652195] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.561 [2024-07-25 12:13:18.652203] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.652219] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.662004] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.662157] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.662175] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.662183] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.662189] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.662209] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 
00:27:31.562 [2024-07-25 12:13:18.672058] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.672201] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.672220] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.672227] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.672234] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.672251] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.682099] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.682240] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.682259] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.682266] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.682272] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.682288] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.692108] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.692257] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.692275] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.692282] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.692289] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.692305] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 
00:27:31.562 [2024-07-25 12:13:18.702135] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.702280] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.702298] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.702305] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.702312] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.702329] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.712138] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.712279] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.712301] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.712308] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.712314] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.712331] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.722185] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.722324] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.722343] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.722350] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.722356] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.722372] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 
00:27:31.562 [2024-07-25 12:13:18.732228] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.732371] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.732389] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.732396] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.732403] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.732419] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.742245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.742406] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.742425] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.742431] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.742437] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.742454] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.752286] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.752428] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.752446] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.752453] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.752462] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.752479] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 
00:27:31.562 [2024-07-25 12:13:18.762302] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.762446] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.762465] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.762472] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.762478] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.762495] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.772357] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.772503] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.772521] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.772528] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.772534] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.772551] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 00:27:31.562 [2024-07-25 12:13:18.782341] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.562 [2024-07-25 12:13:18.782485] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.562 [2024-07-25 12:13:18.782504] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.562 [2024-07-25 12:13:18.782511] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.562 [2024-07-25 12:13:18.782517] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.562 [2024-07-25 12:13:18.782533] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.562 qpair failed and we were unable to recover it. 
00:27:31.562 [2024-07-25 12:13:18.792396] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.563 [2024-07-25 12:13:18.792558] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.563 [2024-07-25 12:13:18.792577] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.563 [2024-07-25 12:13:18.792584] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.563 [2024-07-25 12:13:18.792590] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.563 [2024-07-25 12:13:18.792607] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.563 qpair failed and we were unable to recover it. 00:27:31.563 [2024-07-25 12:13:18.802416] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.563 [2024-07-25 12:13:18.802562] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.563 [2024-07-25 12:13:18.802581] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.563 [2024-07-25 12:13:18.802588] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.563 [2024-07-25 12:13:18.802594] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.563 [2024-07-25 12:13:18.802610] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.563 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.812457] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.812598] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.812617] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.812624] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.812630] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.812646] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 
00:27:31.824 [2024-07-25 12:13:18.822406] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.822556] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.822574] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.822581] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.822588] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.822604] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.832504] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.832650] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.832668] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.832675] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.832681] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.832698] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.842534] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.842679] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.842698] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.842705] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.842715] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.842732] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 
00:27:31.824 [2024-07-25 12:13:18.852565] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.852716] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.852734] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.852742] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.852750] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.852766] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.862586] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.862733] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.862750] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.862757] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.862763] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.862779] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.872614] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.872795] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.872814] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.872821] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.872828] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.872844] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 
00:27:31.824 [2024-07-25 12:13:18.882650] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.882794] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.882813] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.882820] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.882826] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.882843] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.892756] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.892906] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.892925] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.892932] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.892939] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.892956] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.824 [2024-07-25 12:13:18.902641] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.902787] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.902806] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.902813] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.902819] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.902836] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 
00:27:31.824 [2024-07-25 12:13:18.912665] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.824 [2024-07-25 12:13:18.912847] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.824 [2024-07-25 12:13:18.912865] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.824 [2024-07-25 12:13:18.912872] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.824 [2024-07-25 12:13:18.912880] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.824 [2024-07-25 12:13:18.912897] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.824 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.922773] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.922913] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.922932] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.922938] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.922944] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.922961] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.932788] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.932934] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.932953] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.932964] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.932970] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.932988] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 
00:27:31.825 [2024-07-25 12:13:18.942809] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.942959] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.942977] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.942983] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.942989] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.943007] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.952842] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.952989] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.953007] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.953014] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.953021] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.953037] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.962893] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.963052] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.963071] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.963078] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.963086] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.963103] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 
00:27:31.825 [2024-07-25 12:13:18.972917] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.973069] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.973088] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.973095] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.973102] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.973118] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.982894] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.983039] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.983063] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.983070] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.983077] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.983093] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:18.992949] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:18.993104] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:18.993123] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:18.993130] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:18.993136] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:18.993153] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 
00:27:31.825 [2024-07-25 12:13:19.002944] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:19.003096] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:19.003114] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:19.003124] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:19.003130] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:19.003147] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:19.012966] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:19.013118] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:19.013136] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:19.013144] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:19.013151] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:19.013167] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:19.022989] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:19.023143] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:19.023161] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:19.023173] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:19.023179] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:19.023197] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 
00:27:31.825 [2024-07-25 12:13:19.033079] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:19.033222] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:19.033241] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:19.033248] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:19.033254] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:19.033270] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:19.043120] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.825 [2024-07-25 12:13:19.043263] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.825 [2024-07-25 12:13:19.043281] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.825 [2024-07-25 12:13:19.043289] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.825 [2024-07-25 12:13:19.043296] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.825 [2024-07-25 12:13:19.043312] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.825 qpair failed and we were unable to recover it. 00:27:31.825 [2024-07-25 12:13:19.053152] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.826 [2024-07-25 12:13:19.053312] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.826 [2024-07-25 12:13:19.053331] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.826 [2024-07-25 12:13:19.053338] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.826 [2024-07-25 12:13:19.053345] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.826 [2024-07-25 12:13:19.053362] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.826 qpair failed and we were unable to recover it. 
00:27:31.826 [2024-07-25 12:13:19.063176] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:31.826 [2024-07-25 12:13:19.063320] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:31.826 [2024-07-25 12:13:19.063339] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:31.826 [2024-07-25 12:13:19.063347] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:31.826 [2024-07-25 12:13:19.063353] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:31.826 [2024-07-25 12:13:19.063370] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:31.826 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.073126] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.073283] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.073302] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.073309] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.073316] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.073333] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.083245] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.083388] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.083406] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.083413] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.083420] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.083436] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 
00:27:32.086 [2024-07-25 12:13:19.093287] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.093431] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.093449] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.093457] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.093463] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.093480] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.103311] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.103459] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.103478] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.103485] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.103492] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.103509] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.113240] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.113380] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.113398] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.113409] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.113415] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.113432] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 
00:27:32.086 [2024-07-25 12:13:19.123345] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.123489] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.123508] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.123516] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.123522] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x23e8f30 00:27:32.086 [2024-07-25 12:13:19.123539] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 3 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.133401] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.133592] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.133621] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.133634] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.133644] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9144000b90 00:27:32.086 [2024-07-25 12:13:19.133669] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.143392] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.143550] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.143569] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.143577] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.143585] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9144000b90 00:27:32.086 [2024-07-25 12:13:19.143603] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 4 00:27:32.086 qpair failed and we were unable to recover it. 
00:27:32.086 [2024-07-25 12:13:19.153427] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.153575] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.153598] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.153607] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.153614] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:32.086 [2024-07-25 12:13:19.153633] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.086 qpair failed and we were unable to recover it. 00:27:32.086 [2024-07-25 12:13:19.163459] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.086 [2024-07-25 12:13:19.163600] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.086 [2024-07-25 12:13:19.163619] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.086 [2024-07-25 12:13:19.163627] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.086 [2024-07-25 12:13:19.163633] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f914c000b90 00:27:32.087 [2024-07-25 12:13:19.163652] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 2 00:27:32.087 qpair failed and we were unable to recover it. 00:27:32.087 [2024-07-25 12:13:19.163792] nvme_ctrlr.c:4476:nvme_ctrlr_keep_alive: *ERROR*: [nqn.2016-06.io.spdk:cnode1] Submitting Keep Alive failed 00:27:32.087 A controller has encountered a failure and is being reset. 00:27:32.087 [2024-07-25 12:13:19.173548] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.087 [2024-07-25 12:13:19.173735] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.087 [2024-07-25 12:13:19.173765] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.087 [2024-07-25 12:13:19.173776] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.087 [2024-07-25 12:13:19.173786] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9154000b90 00:27:32.087 [2024-07-25 12:13:19.173810] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.087 qpair failed and we were unable to recover it. 
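Alongside the I/O qpair CONNECT rejections, the host at this point also fails to submit its periodic Keep Alive on the admin association (nvme_ctrlr_keep_alive), reports that a controller has encountered a failure, and starts a reset; the entries that follow show the reset completing and the controllers being re-attached. A hedged grep sketch for pulling just that transition out of the saved log (same hypothetical file name as in the earlier sketch):

# Extract the keep-alive failure -> reset -> re-attach timeline from the log.
# Assumes the hypothetical target_disconnect.log copy used above.
grep -nE 'Keep Alive failed|encountered a failure|properly reset|Attaching to NVMe over Fabrics|Attached to NVMe over Fabrics' target_disconnect.log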
00:27:32.087 [2024-07-25 12:13:19.183464] ctrlr.c: 761:_nvmf_ctrlr_add_io_qpair: *ERROR*: Unknown controller ID 0x1 00:27:32.087 [2024-07-25 12:13:19.183615] nvme_fabric.c: 600:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command failed, rc -5, trtype:TCP adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1 00:27:32.087 [2024-07-25 12:13:19.183634] nvme_fabric.c: 611:_nvme_fabric_qpair_connect_poll: *ERROR*: Connect command completed with error: sct 1, sc 130 00:27:32.087 [2024-07-25 12:13:19.183642] nvme_tcp.c:2435:nvme_tcp_ctrlr_connect_qpair_poll: *ERROR*: Failed to poll NVMe-oF Fabric CONNECT command 00:27:32.087 [2024-07-25 12:13:19.183649] nvme_tcp.c:2225:nvme_tcp_qpair_process_completions: *ERROR*: Failed to connect tqpair=0x7f9154000b90 00:27:32.087 [2024-07-25 12:13:19.183667] nvme_qpair.c: 804:spdk_nvme_qpair_process_completions: *ERROR*: CQ transport error -6 (No such device or address) on qpair id 1 00:27:32.087 qpair failed and we were unable to recover it. 00:27:32.087 [2024-07-25 12:13:19.183771] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x23f6ff0 (9): Bad file descriptor 00:27:32.087 Controller properly reset. 00:27:32.087 Initializing NVMe Controllers 00:27:32.087 Attaching to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.087 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:cnode1 00:27:32.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 0 00:27:32.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 1 00:27:32.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 2 00:27:32.087 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:cnode1) with lcore 3 00:27:32.087 Initialization complete. Launching workers. 
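Once the controller has been reset, re-attached, and associated with lcores 0-3, the same portal can also be probed by hand with nvme-cli, using the address, port, and subsystem NQN reported throughout the failures above. This is only an illustrative sketch, not part of the test flow; it assumes nvme-cli is installed on the initiator host and that the SPDK target is still listening on 10.0.0.2:4420.

# See what the target currently advertises on that portal.
nvme discover -t tcp -a 10.0.0.2 -s 4420

# Open a fresh association; the target assigns a new controller ID, so this
# takes the normal admin CONNECT path rather than the I/O-qpair CONNECT that
# was being rejected with "Unknown controller ID" above.
nvme connect -t tcp -a 10.0.0.2 -s 4420 -n nqn.2016-06.io.spdk:cnode1

# Detach again when finished.
nvme disconnect -n nqn.2016-06.io.spdk:cnode1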
00:27:32.087 Starting thread on core 1 00:27:32.087 Starting thread on core 2 00:27:32.087 Starting thread on core 3 00:27:32.087 Starting thread on core 0 00:27:32.087 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- host/target_disconnect.sh@51 -- # sync 00:27:32.347 00:27:32.347 real 0m11.511s 00:27:32.347 user 0m20.433s 00:27:32.347 sys 0m4.393s 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect.nvmf_target_disconnect_tc2 -- common/autotest_common.sh@10 -- # set +x 00:27:32.347 ************************************ 00:27:32.347 END TEST nvmf_target_disconnect_tc2 00:27:32.347 ************************************ 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1142 -- # return 0 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@72 -- # '[' -n '' ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- host/target_disconnect.sh@77 -- # nvmftestfini 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@488 -- # nvmfcleanup 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@117 -- # sync 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@120 -- # set +e 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@121 -- # for i in {1..20} 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:27:32.347 rmmod nvme_tcp 00:27:32.347 rmmod nvme_fabrics 00:27:32.347 rmmod nvme_keyring 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@124 -- # set -e 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@125 -- # return 0 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@489 -- # '[' -n 485631 ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@490 -- # killprocess 485631 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@948 -- # '[' -z 485631 ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@952 -- # kill -0 485631 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # uname 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 485631 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@954 -- # process_name=reactor_4 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@958 -- # '[' reactor_4 = sudo ']' 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@966 -- # echo 'killing process with pid 485631' 00:27:32.347 
killing process with pid 485631 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@967 -- # kill 485631 00:27:32.347 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@972 -- # wait 485631 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@278 -- # remove_spdk_ns 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 15> /dev/null' 00:27:32.606 12:13:19 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:34.513 12:13:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:27:34.513 00:27:34.513 real 0m19.294s 00:27:34.513 user 0m48.299s 00:27:34.513 sys 0m8.607s 00:27:34.513 12:13:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.513 12:13:21 nvmf_tcp.nvmf_host.nvmf_target_disconnect -- common/autotest_common.sh@10 -- # set +x 00:27:34.513 ************************************ 00:27:34.513 END TEST nvmf_target_disconnect 00:27:34.513 ************************************ 00:27:34.773 12:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1142 -- # return 0 00:27:34.773 12:13:21 nvmf_tcp.nvmf_host -- nvmf/nvmf_host.sh@51 -- # trap - SIGINT SIGTERM EXIT 00:27:34.773 00:27:34.773 real 5m44.623s 00:27:34.773 user 10m50.961s 00:27:34.773 sys 1m44.202s 00:27:34.773 12:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.773 12:13:21 nvmf_tcp.nvmf_host -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 ************************************ 00:27:34.773 END TEST nvmf_host 00:27:34.773 ************************************ 00:27:34.773 12:13:21 nvmf_tcp -- common/autotest_common.sh@1142 -- # return 0 00:27:34.773 00:27:34.773 real 20m58.247s 00:27:34.773 user 45m23.899s 00:27:34.773 sys 6m14.546s 00:27:34.773 12:13:21 nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:34.773 12:13:21 nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 ************************************ 00:27:34.773 END TEST nvmf_tcp 00:27:34.773 ************************************ 00:27:34.773 12:13:21 -- common/autotest_common.sh@1142 -- # return 0 00:27:34.773 12:13:21 -- spdk/autotest.sh@288 -- # [[ 0 -eq 0 ]] 00:27:34.773 12:13:21 -- spdk/autotest.sh@289 -- # run_test spdkcli_nvmf_tcp /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:34.773 12:13:21 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:34.773 12:13:21 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:34.773 12:13:21 -- common/autotest_common.sh@10 -- # set +x 00:27:34.773 ************************************ 00:27:34.773 START TEST spdkcli_nvmf_tcp 00:27:34.773 ************************************ 00:27:34.773 12:13:21 
spdkcli_nvmf_tcp -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/nvmf.sh --transport=tcp 00:27:34.773 * Looking for test storage... 00:27:34.773 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/common.sh 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/json_config/clear_config.py 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # uname -s 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- paths/export.sh@3 -- # 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- paths/export.sh@5 -- # export PATH 00:27:34.773 12:13:21 spdkcli_nvmf_tcp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@47 -- # : 0 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:34.774 12:13:21 spdkcli_nvmf_tcp -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@12 -- # MATCH_FILE=spdkcli_nvmf.test 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@13 -- # SPDKCLI_BRANCH=/nvmf 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@15 -- # trap cleanup EXIT 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@17 -- # timing_enter run_nvmf_tgt 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@18 -- # run_nvmf_tgt 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@33 -- # nvmf_tgt_pid=487320 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@34 -- # waitforlisten 487320 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@829 -- # '[' -z 487320 ']' 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- spdkcli/common.sh@32 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -m 0x3 -p 0 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@834 -- # local max_retries=100 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and 
listen on UNIX domain socket /var/tmp/spdk.sock...' 00:27:34.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@838 -- # xtrace_disable 00:27:34.774 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.033 [2024-07-25 12:13:22.055916] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:27:35.033 [2024-07-25 12:13:22.055967] [ DPDK EAL parameters: nvmf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid487320 ] 00:27:35.033 EAL: No free 2048 kB hugepages reported on node 1 00:27:35.033 [2024-07-25 12:13:22.111649] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 2 00:27:35.033 [2024-07-25 12:13:22.185812] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:27:35.033 [2024-07-25 12:13:22.185815] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:27:35.602 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:27:35.602 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@862 -- # return 0 00:27:35.602 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@19 -- # timing_exit run_nvmf_tgt 00:27:35.602 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:35.602 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@21 -- # NVMF_TARGET_IP=127.0.0.1 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@22 -- # [[ tcp == \r\d\m\a ]] 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@27 -- # timing_enter spdkcli_create_nvmf_config 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:35.862 12:13:22 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@65 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/bdevs/malloc create 32 512 Malloc1'\'' '\''Malloc1'\'' True 00:27:35.862 '\''/bdevs/malloc create 32 512 Malloc2'\'' '\''Malloc2'\'' True 00:27:35.862 '\''/bdevs/malloc create 32 512 Malloc3'\'' '\''Malloc3'\'' True 00:27:35.862 '\''/bdevs/malloc create 32 512 Malloc4'\'' '\''Malloc4'\'' True 00:27:35.862 '\''/bdevs/malloc create 32 512 Malloc5'\'' '\''Malloc5'\'' True 00:27:35.862 '\''/bdevs/malloc create 32 512 Malloc6'\'' '\''Malloc6'\'' True 00:27:35.862 '\''nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192'\'' '\'''\'' True 00:27:35.862 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1'\'' '\''Malloc3'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2'\'' '\''Malloc4'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.862 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2'\'' '\''Malloc2'\'' True 00:27:35.862 
'\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.862 '\''/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1'\'' '\''Malloc1'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4'\'' '\''127.0.0.1:4260'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True'\'' '\''Allow any host'\'' 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False'\'' '\''Allow any host'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4'\'' '\''127.0.0.1:4261'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4'\'' '\''127.0.0.1:4262'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5'\'' '\''Malloc5'\'' True 00:27:35.862 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6'\'' '\''Malloc6'\'' True 00:27:35.862 '\''/nvmf/referral create tcp 127.0.0.2 4030 IPv4'\'' 00:27:35.862 ' 00:27:38.399 [2024-07-25 12:13:25.263546] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:27:39.337 [2024-07-25 12:13:26.439501] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4260 *** 00:27:41.937 [2024-07-25 12:13:28.606231] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4261 *** 00:27:43.316 [2024-07-25 12:13:30.472223] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4262 *** 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc1', 'Malloc1', True] 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc2', 'Malloc2', True] 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc3', 'Malloc3', True] 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc4', 'Malloc4', True] 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc5', 'Malloc5', True] 00:27:44.696 Executing command: ['/bdevs/malloc create 32 512 Malloc6', 'Malloc6', True] 00:27:44.696 Executing command: ['nvmf/transport create tcp max_io_qpairs_per_ctrlr=4 io_unit_size=8192', '', True] 00:27:44.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode1 N37SXV509SRW max_namespaces=4 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc3 1', 'Malloc3', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc4 2', 'Malloc4', True] 00:27:44.696 Executing 
command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode2 N37SXV509SRD max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/namespaces create Malloc2', 'Malloc2', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode2/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.696 Executing command: ['/nvmf/subsystem create nqn.2014-08.org.spdk:cnode3 N37SXV509SRR max_namespaces=2 allow_any_host=True', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/namespaces create Malloc1', 'Malloc1', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4260 IPv4', '127.0.0.1:4260', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode1', 'nqn.2014-08.org.spdk:cnode1', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host True', 'Allow any host', False] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1 allow_any_host False', 'Allow any host', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4261 IPv4', '127.0.0.1:4261', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses create tcp 127.0.0.1 4262 IPv4', '127.0.0.1:4262', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts create nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc5', 'Malloc5', True] 00:27:44.696 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces create Malloc6', 'Malloc6', True] 00:27:44.696 Executing command: ['/nvmf/referral create tcp 127.0.0.2 4030 IPv4', False] 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@66 -- # timing_exit spdkcli_create_nvmf_config 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@68 -- # timing_enter spdkcli_check_match 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@69 -- # check_match 00:27:44.955 12:13:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@44 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdkcli.py ll /nvmf 00:27:45.215 12:13:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/app/match/match 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test.match 00:27:45.215 12:13:32 spdkcli_nvmf_tcp -- spdkcli/common.sh@46 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_nvmf.test 00:27:45.215 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@70 -- # timing_exit spdkcli_check_match 00:27:45.215 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:45.215 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.474 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@72 -- # timing_enter spdkcli_clear_nvmf_config 00:27:45.474 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:45.474 12:13:32 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:45.474 12:13:32 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@87 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_job.py ''\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1'\'' '\''Malloc3'\'' 00:27:45.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all'\'' '\''Malloc4'\'' 00:27:45.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:45.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all'\'' '\''nqn.2014-08.org.spdk:cnode1'\'' 00:27:45.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262'\'' '\''127.0.0.1:4262'\'' 00:27:45.474 '\''/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all'\'' '\''127.0.0.1:4261'\'' 00:27:45.474 '\''/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3'\'' '\''nqn.2014-08.org.spdk:cnode3'\'' 00:27:45.474 '\''/nvmf/subsystem delete_all'\'' '\''nqn.2014-08.org.spdk:cnode2'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc6'\'' '\''Malloc6'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc5'\'' '\''Malloc5'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc4'\'' '\''Malloc4'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc3'\'' '\''Malloc3'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc2'\'' '\''Malloc2'\'' 00:27:45.474 '\''/bdevs/malloc delete Malloc1'\'' '\''Malloc1'\'' 00:27:45.474 ' 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete nsid=1', 'Malloc3', False] 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/namespaces delete_all', 'Malloc4', False] 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/hosts delete nqn.2014-08.org.spdk:cnode2', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode3/hosts delete_all', 'nqn.2014-08.org.spdk:cnode1', False] 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete tcp 127.0.0.1 4262', '127.0.0.1:4262', False] 00:27:50.753 Executing command: ['/nvmf/subsystem/nqn.2014-08.org.spdk:cnode1/listen_addresses delete_all', '127.0.0.1:4261', False] 00:27:50.753 Executing command: ['/nvmf/subsystem delete nqn.2014-08.org.spdk:cnode3', 'nqn.2014-08.org.spdk:cnode3', False] 00:27:50.753 Executing command: ['/nvmf/subsystem delete_all', 'nqn.2014-08.org.spdk:cnode2', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc6', 'Malloc6', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc5', 'Malloc5', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc4', 
'Malloc4', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc3', 'Malloc3', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc2', 'Malloc2', False] 00:27:50.753 Executing command: ['/bdevs/malloc delete Malloc1', 'Malloc1', False] 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@88 -- # timing_exit spdkcli_clear_nvmf_config 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@728 -- # xtrace_disable 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@90 -- # killprocess 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 487320 ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # uname 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@966 -- # echo 'killing process with pid 487320' 00:27:50.753 killing process with pid 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@967 -- # kill 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@972 -- # wait 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/nvmf.sh@1 -- # cleanup 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@10 -- # '[' -n '' ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@13 -- # '[' -n 487320 ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@14 -- # killprocess 487320 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@948 -- # '[' -z 487320 ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@952 -- # kill -0 487320 00:27:50.753 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (487320) - No such process 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@975 -- # echo 'Process with pid 487320 is not found' 00:27:50.753 Process with pid 487320 is not found 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@16 -- # '[' -n '' ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@19 -- # '[' -n '' ']' 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- spdkcli/common.sh@22 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/spdkcli_nvmf.test /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/spdkcli/match_files/spdkcli_details_vhost.test /tmp/sample_aio 00:27:50.753 00:27:50.753 real 0m15.814s 00:27:50.753 user 0m32.777s 00:27:50.753 sys 0m0.713s 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@1124 -- # xtrace_disable 00:27:50.753 12:13:37 spdkcli_nvmf_tcp -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 ************************************ 00:27:50.753 END TEST spdkcli_nvmf_tcp 00:27:50.753 ************************************ 00:27:50.753 12:13:37 -- common/autotest_common.sh@1142 -- # return 0 00:27:50.753 12:13:37 -- spdk/autotest.sh@290 -- # run_test nvmf_identify_passthru 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:50.753 12:13:37 -- common/autotest_common.sh@1099 -- # '[' 3 -le 1 ']' 00:27:50.753 12:13:37 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:27:50.753 12:13:37 -- common/autotest_common.sh@10 -- # set +x 00:27:50.753 ************************************ 00:27:50.753 START TEST nvmf_identify_passthru 00:27:50.753 ************************************ 00:27:50.753 12:13:37 nvmf_identify_passthru -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/identify_passthru.sh --transport=tcp 00:27:50.753 * Looking for test storage... 00:27:50.753 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:27:50.753 12:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # uname -s 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@47 -- # : 0 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:27:50.753 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@51 -- # have_pci_nics=0 00:27:50.753 12:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@10 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:27:50.753 12:13:37 nvmf_identify_passthru -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.753 12:13:37 nvmf_identify_passthru -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.754 12:13:37 nvmf_identify_passthru -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.754 12:13:37 nvmf_identify_passthru -- paths/export.sh@5 -- # export PATH 00:27:50.754 12:13:37 nvmf_identify_passthru -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:27:50.754 12:13:37 nvmf_identify_passthru -- target/identify_passthru.sh@12 -- # nvmftestinit 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@448 -- # prepare_net_devs 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@410 -- # local -g is_hw=no 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@412 -- # remove_spdk_ns 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:27:50.754 12:13:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:27:50.754 12:13:37 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:27:50.754 12:13:37 nvmf_identify_passthru -- nvmf/common.sh@285 -- # xtrace_disable 00:27:50.754 12:13:37 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:27:56.031 12:13:42 
nvmf_identify_passthru -- nvmf/common.sh@291 -- # pci_devs=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@291 -- # local -a pci_devs 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # pci_net_devs=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # pci_drivers=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@293 -- # local -A pci_drivers 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # net_devs=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@295 -- # local -ga net_devs 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # e810=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@296 -- # local -ga e810 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # x722=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@297 -- # local -ga x722 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # mlx=() 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@298 -- # local -ga mlx 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:27:56.031 Found 0000:86:00.0 (0x8086 - 0x159b) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == 
\0\x\1\0\1\7 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:27:56.031 Found 0000:86:00.1 (0x8086 - 0x159b) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.031 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:27:56.032 Found net devices under 0000:86:00.0: cvl_0_0 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@390 -- # [[ up == up ]] 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:27:56.032 Found net devices under 0000:86:00.1: cvl_0_1 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@414 -- # is_hw=yes 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 
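The NIC detection traced above reduces to a sysfs lookup: for every PCI function whose vendor/device pair matches a supported NIC, the harness globs /sys/bus/pci/devices/<bdf>/net/ to learn which kernel interface is bound to it (cvl_0_0 and cvl_0_1 here). A minimal standalone sketch of that lookup, assuming the Intel E810 IDs (0x8086/0x159b) seen in this run:

    #!/usr/bin/env bash
    # Print the net interfaces bound to Intel E810 (8086:159b) PCI functions,
    # the same sysfs relationship the pci_net_devs glob above relies on.
    set -euo pipefail
    for pci in /sys/bus/pci/devices/*; do
        [[ $(cat "$pci/vendor") == 0x8086 && $(cat "$pci/device") == 0x159b ]] || continue
        for net in "$pci"/net/*; do
            [[ -e $net ]] && echo "Found net devices under ${pci##*/}: ${net##*/}"
        done
    done
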
00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:27:56.032 12:13:42 nvmf_identify_passthru -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:27:56.032 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:27:56.032 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.181 ms 00:27:56.032 00:27:56.032 --- 10.0.0.2 ping statistics --- 00:27:56.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.032 rtt min/avg/max/mdev = 0.181/0.181/0.181/0.000 ms 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:27:56.032 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:27:56.032 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.398 ms 00:27:56.032 00:27:56.032 --- 10.0.0.1 ping statistics --- 00:27:56.032 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:27:56.032 rtt min/avg/max/mdev = 0.398/0.398/0.398/0.000 ms 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@422 -- # return 0 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@450 -- # '[' '' == iso ']' 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:27:56.032 12:13:43 nvmf_identify_passthru -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@14 -- # timing_enter nvme_identify 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # get_first_nvme_bdf 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # bdfs=() 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1524 -- # local bdfs 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # bdfs=($(get_nvme_bdfs)) 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1525 -- # get_nvme_bdfs 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # bdfs=() 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1513 -- # local bdfs 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/gen_nvme.sh 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1514 -- # jq -r '.config[].params.traddr' 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1515 -- # (( 1 == 0 )) 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1519 -- # printf '%s\n' 0000:5e:00.0 00:27:56.032 12:13:43 nvmf_identify_passthru -- common/autotest_common.sh@1527 -- # echo 0000:5e:00.0 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@16 -- # bdf=0000:5e:00.0 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@17 -- # '[' -z 0000:5e:00.0 ']' 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # grep 'Serial Number:' 00:27:56.032 12:13:43 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # awk '{print $3}' 00:27:56.293 EAL: No free 2048 kB hugepages reported on node 1 00:28:00.491 
12:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@23 -- # nvme_serial_number=BTLJ72430F0E1P0FGN 00:28:00.491 12:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:5e:00.0' -i 0 00:28:00.491 12:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # grep 'Model Number:' 00:28:00.491 12:13:47 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # awk '{print $3}' 00:28:00.491 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@24 -- # nvme_model_number=INTEL 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@26 -- # timing_exit nvme_identify 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@28 -- # timing_enter start_nvmf_tgt 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@31 -- # nvmfpid=494187 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@30 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xF --wait-for-rpc 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@33 -- # trap 'process_shm --id $NVMF_APP_SHM_ID; nvmftestfini; exit 1' SIGINT SIGTERM EXIT 00:28:04.688 12:13:51 nvmf_identify_passthru -- target/identify_passthru.sh@35 -- # waitforlisten 494187 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@829 -- # '[' -z 494187 ']' 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:04.688 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:04.688 12:13:51 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:04.688 [2024-07-25 12:13:51.633793] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:28:04.688 [2024-07-25 12:13:51.633840] [ DPDK EAL parameters: nvmf -c 0xF --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:04.688 EAL: No free 2048 kB hugepages reported on node 1 00:28:04.688 [2024-07-25 12:13:51.692896] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:28:04.688 [2024-07-25 12:13:51.773348] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:04.688 [2024-07-25 12:13:51.773385] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 
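The identify step just traced is the baseline for the whole passthru test: the serial (BTLJ72430F0E1P0FGN) and model read locally over PCIe are later re-read through the NVMe/TCP subsystem and compared. Condensed into a standalone form, assuming the SPDK build tree used in this run (the 0000:5e:00.0 address comes from gen_nvme.sh here and will differ on other hosts):

    #!/usr/bin/env bash
    # Capture the serial and model number of the first local NVMe controller,
    # mirroring the grep/awk extraction performed by identify_passthru.sh above.
    set -euo pipefail
    rootdir=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk

    bdf=$("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr' | head -n1)
    out=$("$rootdir/build/bin/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0)
    serial=$(awk '/Serial Number:/ {print $3}' <<<"$out")
    model=$(awk '/Model Number:/ {print $3}' <<<"$out")
    echo "bdf=$bdf serial=$serial model=$model"
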
00:28:04.688 [2024-07-25 12:13:51.773392] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:04.688 [2024-07-25 12:13:51.773400] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:04.688 [2024-07-25 12:13:51.773405] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:28:04.688 [2024-07-25 12:13:51.773445] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:28:04.688 [2024-07-25 12:13:51.773539] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:28:04.688 [2024-07-25 12:13:51.773625] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:28:04.688 [2024-07-25 12:13:51.773626] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@862 -- # return 0 00:28:05.258 12:13:52 nvmf_identify_passthru -- target/identify_passthru.sh@36 -- # rpc_cmd -v nvmf_set_config --passthru-identify-ctrlr 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:05.258 INFO: Log level set to 20 00:28:05.258 INFO: Requests: 00:28:05.258 { 00:28:05.258 "jsonrpc": "2.0", 00:28:05.258 "method": "nvmf_set_config", 00:28:05.258 "id": 1, 00:28:05.258 "params": { 00:28:05.258 "admin_cmd_passthru": { 00:28:05.258 "identify_ctrlr": true 00:28:05.258 } 00:28:05.258 } 00:28:05.258 } 00:28:05.258 00:28:05.258 INFO: response: 00:28:05.258 { 00:28:05.258 "jsonrpc": "2.0", 00:28:05.258 "id": 1, 00:28:05.258 "result": true 00:28:05.258 } 00:28:05.258 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.258 12:13:52 nvmf_identify_passthru -- target/identify_passthru.sh@37 -- # rpc_cmd -v framework_start_init 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.258 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:05.258 INFO: Setting log level to 20 00:28:05.258 INFO: Setting log level to 20 00:28:05.258 INFO: Log level set to 20 00:28:05.258 INFO: Log level set to 20 00:28:05.258 INFO: Requests: 00:28:05.258 { 00:28:05.258 "jsonrpc": "2.0", 00:28:05.258 "method": "framework_start_init", 00:28:05.258 "id": 1 00:28:05.258 } 00:28:05.258 00:28:05.258 INFO: Requests: 00:28:05.258 { 00:28:05.258 "jsonrpc": "2.0", 00:28:05.258 "method": "framework_start_init", 00:28:05.258 "id": 1 00:28:05.258 } 00:28:05.258 00:28:05.518 [2024-07-25 12:13:52.526906] nvmf_tgt.c: 451:nvmf_tgt_advance_state: *NOTICE*: Custom identify ctrlr handler enabled 00:28:05.518 INFO: response: 00:28:05.518 { 00:28:05.518 "jsonrpc": "2.0", 00:28:05.518 "id": 1, 00:28:05.518 "result": true 00:28:05.518 } 00:28:05.518 00:28:05.518 INFO: response: 00:28:05.518 { 00:28:05.518 "jsonrpc": "2.0", 00:28:05.518 "id": 1, 00:28:05.518 "result": true 00:28:05.518 } 00:28:05.518 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.518 12:13:52 nvmf_identify_passthru -- target/identify_passthru.sh@38 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.518 12:13:52 nvmf_identify_passthru -- 
common/autotest_common.sh@10 -- # set +x 00:28:05.518 INFO: Setting log level to 40 00:28:05.518 INFO: Setting log level to 40 00:28:05.518 INFO: Setting log level to 40 00:28:05.518 [2024-07-25 12:13:52.540262] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:05.518 12:13:52 nvmf_identify_passthru -- target/identify_passthru.sh@39 -- # timing_exit start_nvmf_tgt 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:05.518 12:13:52 nvmf_identify_passthru -- target/identify_passthru.sh@41 -- # rpc_cmd bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:05.518 12:13:52 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 Nvme0n1 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@42 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@43 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@44 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 [2024-07-25 12:13:55.432182] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@46 -- # rpc_cmd nvmf_get_subsystems 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 [ 00:28:08.810 { 00:28:08.810 "nqn": "nqn.2014-08.org.nvmexpress.discovery", 00:28:08.810 "subtype": "Discovery", 00:28:08.810 "listen_addresses": [], 00:28:08.810 "allow_any_host": true, 00:28:08.810 "hosts": [] 00:28:08.810 }, 00:28:08.810 { 00:28:08.810 "nqn": "nqn.2016-06.io.spdk:cnode1", 00:28:08.810 "subtype": "NVMe", 00:28:08.810 "listen_addresses": [ 00:28:08.810 { 00:28:08.810 "trtype": "TCP", 00:28:08.810 "adrfam": "IPv4", 00:28:08.810 "traddr": "10.0.0.2", 00:28:08.810 "trsvcid": "4420" 00:28:08.810 } 00:28:08.810 ], 00:28:08.810 "allow_any_host": true, 00:28:08.810 "hosts": [], 00:28:08.810 "serial_number": 
"SPDK00000000000001", 00:28:08.810 "model_number": "SPDK bdev Controller", 00:28:08.810 "max_namespaces": 1, 00:28:08.810 "min_cntlid": 1, 00:28:08.810 "max_cntlid": 65519, 00:28:08.810 "namespaces": [ 00:28:08.810 { 00:28:08.810 "nsid": 1, 00:28:08.810 "bdev_name": "Nvme0n1", 00:28:08.810 "name": "Nvme0n1", 00:28:08.810 "nguid": "A852FF96CDF74B1CA6E69822738CEA3D", 00:28:08.810 "uuid": "a852ff96-cdf7-4b1c-a6e6-9822738cea3d" 00:28:08.810 } 00:28:08.810 ] 00:28:08.810 } 00:28:08.810 ] 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # grep 'Serial Number:' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # awk '{print $3}' 00:28:08.810 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@54 -- # nvmf_serial_number=BTLJ72430F0E1P0FGN 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_nvme_identify -r ' trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:cnode1' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # grep 'Model Number:' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # awk '{print $3}' 00:28:08.810 EAL: No free 2048 kB hugepages reported on node 1 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@61 -- # nvmf_model_number=INTEL 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@63 -- # '[' BTLJ72430F0E1P0FGN '!=' BTLJ72430F0E1P0FGN ']' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@68 -- # '[' INTEL '!=' INTEL ']' 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@73 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:08.810 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@75 -- # trap - SIGINT SIGTERM EXIT 00:28:08.810 12:13:55 nvmf_identify_passthru -- target/identify_passthru.sh@77 -- # nvmftestfini 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@488 -- # nvmfcleanup 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@117 -- # sync 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@120 -- # set +e 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@121 -- # for i in {1..20} 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:28:08.810 rmmod nvme_tcp 00:28:08.810 rmmod nvme_fabrics 00:28:08.810 rmmod nvme_keyring 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@124 -- # set -e 00:28:08.810 12:13:55 
nvmf_identify_passthru -- nvmf/common.sh@125 -- # return 0 00:28:08.810 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@489 -- # '[' -n 494187 ']' 00:28:08.811 12:13:55 nvmf_identify_passthru -- nvmf/common.sh@490 -- # killprocess 494187 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@948 -- # '[' -z 494187 ']' 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@952 -- # kill -0 494187 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # uname 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 494187 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@966 -- # echo 'killing process with pid 494187' 00:28:08.811 killing process with pid 494187 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@967 -- # kill 494187 00:28:08.811 12:13:55 nvmf_identify_passthru -- common/autotest_common.sh@972 -- # wait 494187 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@492 -- # '[' '' == iso ']' 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@278 -- # remove_spdk_ns 00:28:10.192 12:13:57 nvmf_identify_passthru -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:10.192 12:13:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:10.192 12:13:57 nvmf_identify_passthru -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.732 12:13:59 nvmf_identify_passthru -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:28:12.732 00:28:12.732 real 0m21.716s 00:28:12.732 user 0m29.906s 00:28:12.732 sys 0m4.830s 00:28:12.732 12:13:59 nvmf_identify_passthru -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:12.732 12:13:59 nvmf_identify_passthru -- common/autotest_common.sh@10 -- # set +x 00:28:12.732 ************************************ 00:28:12.732 END TEST nvmf_identify_passthru 00:28:12.732 ************************************ 00:28:12.732 12:13:59 -- common/autotest_common.sh@1142 -- # return 0 00:28:12.732 12:13:59 -- spdk/autotest.sh@292 -- # run_test nvmf_dif /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:12.732 12:13:59 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:12.732 12:13:59 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:12.732 12:13:59 -- common/autotest_common.sh@10 -- # set +x 00:28:12.732 ************************************ 00:28:12.732 START TEST nvmf_dif 00:28:12.732 ************************************ 00:28:12.732 12:13:59 nvmf_dif -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/dif.sh 00:28:12.732 * Looking for test storage... 
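Stepping back over the passthru test that just finished: every rpc_cmd in its trace is a thin wrapper around scripts/rpc.py talking to /var/tmp/spdk.sock, so the target-side configuration it exercised can be replayed by hand. A sketch of that sequence with the NQN, serial number and PCI address copied from this log (other setups would substitute their own values):

    #!/usr/bin/env bash
    # Replay of the JSON-RPC calls traced in the passthru test: enable identify
    # passthru, attach the local NVMe device and export it over NVMe/TCP.
    set -euo pipefail
    rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

    $rpc nvmf_set_config --passthru-identify-ctrlr    # before framework init
    $rpc framework_start_init                         # target was started with --wait-for-rpc
    $rpc nvmf_create_transport -t tcp -o -u 8192
    $rpc bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:5e:00.0
    $rpc nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 -a -s SPDK00000000000001 -m 1
    $rpc nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 Nvme0n1
    $rpc nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420
    $rpc nvmf_get_subsystems                          # cnode1 should list namespace Nvme0n1
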
00:28:12.732 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:28:12.732 12:13:59 nvmf_dif -- target/dif.sh@13 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@7 -- # uname -s 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:28:12.732 12:13:59 nvmf_dif -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:28:12.733 12:13:59 nvmf_dif -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:28:12.733 12:13:59 nvmf_dif -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:28:12.733 12:13:59 nvmf_dif -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:28:12.733 12:13:59 nvmf_dif -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.733 12:13:59 nvmf_dif -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.733 12:13:59 nvmf_dif -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.733 12:13:59 nvmf_dif -- paths/export.sh@5 -- # 
export PATH 00:28:12.733 12:13:59 nvmf_dif -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@47 -- # : 0 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@51 -- # have_pci_nics=0 00:28:12.733 12:13:59 nvmf_dif -- target/dif.sh@15 -- # NULL_META=16 00:28:12.733 12:13:59 nvmf_dif -- target/dif.sh@15 -- # NULL_BLOCK_SIZE=512 00:28:12.733 12:13:59 nvmf_dif -- target/dif.sh@15 -- # NULL_SIZE=64 00:28:12.733 12:13:59 nvmf_dif -- target/dif.sh@15 -- # NULL_DIF=1 00:28:12.733 12:13:59 nvmf_dif -- target/dif.sh@135 -- # nvmftestinit 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@448 -- # prepare_net_devs 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@410 -- # local -g is_hw=no 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@412 -- # remove_spdk_ns 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:28:12.733 12:13:59 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:28:12.733 12:13:59 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:28:12.733 12:13:59 nvmf_dif -- nvmf/common.sh@285 -- # xtrace_disable 00:28:12.733 12:13:59 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@291 -- # pci_devs=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@291 -- # local -a pci_devs 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@292 -- # pci_net_devs=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@293 -- # pci_drivers=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@293 -- # local -A pci_drivers 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@295 -- # net_devs=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@295 -- # local -ga net_devs 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@296 -- # e810=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@296 -- # local -ga e810 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@297 -- # x722=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@297 -- # local -ga x722 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@298 
-- # mlx=() 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@298 -- # local -ga mlx 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:28:16.953 12:14:04 nvmf_dif -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:28:16.954 Found 0000:86:00.0 (0x8086 - 0x159b) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:28:16.954 Found 0000:86:00.1 (0x8086 - 0x159b) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 
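Editor's note on the device discovery above: gather_supported_nvmf_pci_devs simply matches the PCI vendor/device IDs collected in the e810/x722/mlx arrays and then lists the kernel net devices that sysfs exposes under each matching function. A minimal standalone sketch of the same idea, assuming a root shell on a node with Intel E810 ports (8086:159b) bound to the ice driver; the lspci invocation is illustrative and not taken from the harness:

  # enumerate E810 functions, then the net devices the kernel created for each
  for pci in $(lspci -Dn -d 8086:159b | awk '{print $1}'); do
    for dev in /sys/bus/pci/devices/"$pci"/net/*; do
      [ -e "$dev" ] && echo "Found net device under $pci: $(basename "$dev")"
    done
  done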
00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:28:16.954 Found net devices under 0000:86:00.0: cvl_0_0 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@390 -- # [[ up == up ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:28:16.954 Found net devices under 0000:86:00.1: cvl_0_1 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@404 -- # (( 2 == 0 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@414 -- # is_hw=yes 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:28:16.954 12:14:04 nvmf_dif -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:28:17.214 12:14:04 
nvmf_dif -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:28:17.214 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:28:17.214 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.179 ms 00:28:17.214 00:28:17.214 --- 10.0.0.2 ping statistics --- 00:28:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.214 rtt min/avg/max/mdev = 0.179/0.179/0.179/0.000 ms 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:28:17.214 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 00:28:17.214 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.400 ms 00:28:17.214 00:28:17.214 --- 10.0.0.1 ping statistics --- 00:28:17.214 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:28:17.214 rtt min/avg/max/mdev = 0.400/0.400/0.400/0.000 ms 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@422 -- # return 0 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:28:17.214 12:14:04 nvmf_dif -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:28:19.754 0000:00:04.7 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:5e:00.0 (8086 0a54): Already using the vfio-pci driver 00:28:19.754 0000:00:04.6 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.5 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.4 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.3 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.2 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.1 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:00:04.0 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.7 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.6 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.5 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.4 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.3 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.2 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.1 (8086 2021): Already using the vfio-pci driver 00:28:19.754 0000:80:04.0 (8086 2021): Already using the vfio-pci driver 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:28:19.754 12:14:07 nvmf_dif -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:28:20.014 12:14:07 nvmf_dif -- target/dif.sh@136 -- # NVMF_TRANSPORT_OPTS+=' --dif-insert-or-strip' 00:28:20.014 12:14:07 nvmf_dif -- target/dif.sh@137 -- # nvmfappstart 00:28:20.014 12:14:07 nvmf_dif -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@722 -- # xtrace_disable 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.014 12:14:07 nvmf_dif -- nvmf/common.sh@481 -- # nvmfpid=499645 00:28:20.014 12:14:07 nvmf_dif -- nvmf/common.sh@482 -- # waitforlisten 499645 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@829 
-- # '[' -z 499645 ']' 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@834 -- # local max_retries=100 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:28:20.014 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@838 -- # xtrace_disable 00:28:20.014 12:14:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.014 12:14:07 nvmf_dif -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF 00:28:20.014 [2024-07-25 12:14:07.067074] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:28:20.014 [2024-07-25 12:14:07.067118] [ DPDK EAL parameters: nvmf -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:28:20.014 EAL: No free 2048 kB hugepages reported on node 1 00:28:20.014 [2024-07-25 12:14:07.124425] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:20.014 [2024-07-25 12:14:07.206108] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:28:20.014 [2024-07-25 12:14:07.206142] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:28:20.014 [2024-07-25 12:14:07.206149] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:28:20.014 [2024-07-25 12:14:07.206155] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:28:20.014 [2024-07-25 12:14:07.206160] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 
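Editor's note on the nvmf_tcp_init plumbing logged above: the target-side port is moved into a private network namespace, both ends get 10.0.0.x addresses, TCP/4420 is opened, connectivity is verified in both directions, and only then is nvmf_tgt launched inside that namespace. A condensed sketch of the same sequence, using the commands exactly as they appear in the log (assumes root and the cvl_0_0/cvl_0_1 interfaces created by the ice driver; the target binary path is shortened here):

  ip -4 addr flush cvl_0_0
  ip -4 addr flush cvl_0_1
  ip netns add cvl_0_0_ns_spdk
  ip link set cvl_0_0 netns cvl_0_0_ns_spdk            # target port lives in the namespace
  ip addr add 10.0.0.1/24 dev cvl_0_1                  # initiator side stays in the root namespace
  ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0
  ip link set cvl_0_1 up
  ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up
  ip netns exec cvl_0_0_ns_spdk ip link set lo up
  iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT
  ping -c 1 10.0.0.2                                   # initiator -> target
  ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1     # target -> initiator
  ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF &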
00:28:20.014 [2024-07-25 12:14:07.206181] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@862 -- # return 0 00:28:20.953 12:14:07 nvmf_dif -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@728 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 12:14:07 nvmf_dif -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:28:20.953 12:14:07 nvmf_dif -- target/dif.sh@139 -- # create_transport 00:28:20.953 12:14:07 nvmf_dif -- target/dif.sh@50 -- # rpc_cmd nvmf_create_transport -t tcp -o --dif-insert-or-strip 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 [2024-07-25 12:14:07.902466] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.953 12:14:07 nvmf_dif -- target/dif.sh@141 -- # run_test fio_dif_1_default fio_dif_1 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 ************************************ 00:28:20.953 START TEST fio_dif_1_default 00:28:20.953 ************************************ 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1123 -- # fio_dif_1 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@86 -- # create_subsystems 0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@28 -- # local sub 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@30 -- # for sub in "$@" 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@31 -- # create_subsystem 0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@18 -- # local sub_id=0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 bdev_null0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:20.953 [2024-07-25 12:14:07.970747] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # fio /dev/fd/62 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@87 -- # create_json_sub_conf 0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # config=() 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@532 -- # local subsystem config 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:20.953 { 00:28:20.953 "params": { 00:28:20.953 "name": "Nvme$subsystem", 00:28:20.953 "trtype": "$TEST_TRANSPORT", 00:28:20.953 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:20.953 "adrfam": "ipv4", 00:28:20.953 "trsvcid": "$NVMF_PORT", 00:28:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:20.953 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:20.953 "hdgst": ${hdgst:-false}, 00:28:20.953 "ddgst": ${ddgst:-false} 00:28:20.953 }, 00:28:20.953 "method": "bdev_nvme_attach_controller" 00:28:20.953 } 00:28:20.953 EOF 00:28:20.953 )") 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1341 -- # shift 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@82 -- # gen_fio_conf 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@54 -- # local file 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@56 -- # cat 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@554 -- # cat 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libasan 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file = 1 )) 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- target/dif.sh@72 -- # (( file <= files )) 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@556 -- # jq . 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@557 -- # IFS=, 00:28:20.953 12:14:07 nvmf_dif.fio_dif_1_default -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:20.953 "params": { 00:28:20.953 "name": "Nvme0", 00:28:20.953 "trtype": "tcp", 00:28:20.953 "traddr": "10.0.0.2", 00:28:20.953 "adrfam": "ipv4", 00:28:20.953 "trsvcid": "4420", 00:28:20.953 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:20.953 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:20.953 "hdgst": false, 00:28:20.953 "ddgst": false 00:28:20.953 }, 00:28:20.953 "method": "bdev_nvme_attach_controller" 00:28:20.953 }' 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:20.953 12:14:08 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:21.212 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:21.212 fio-3.35 00:28:21.212 Starting 1 thread 00:28:21.212 EAL: No free 2048 kB hugepages reported on node 1 00:28:33.427 00:28:33.427 filename0: (groupid=0, jobs=1): err= 0: pid=500078: Thu Jul 25 12:14:18 2024 00:28:33.427 read: IOPS=94, BW=377KiB/s (386kB/s)(3776KiB/10020msec) 00:28:33.427 slat (nsec): min=5911, max=25807, avg=6287.51, stdev=1143.45 00:28:33.427 clat (usec): min=41783, max=44757, avg=42439.83, stdev=541.95 00:28:33.427 lat (usec): min=41790, max=44783, avg=42446.12, stdev=542.08 00:28:33.427 clat percentiles (usec): 00:28:33.427 | 1.00th=[41681], 5.00th=[41681], 10.00th=[42206], 20.00th=[42206], 00:28:33.427 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:28:33.427 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:33.427 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44827], 99.95th=[44827], 00:28:33.427 | 99.99th=[44827] 00:28:33.427 bw ( KiB/s): min= 352, max= 384, per=99.78%, avg=376.00, stdev=14.22, samples=20 00:28:33.427 iops : min= 88, max= 96, 
avg=94.00, stdev= 3.55, samples=20 00:28:33.427 lat (msec) : 50=100.00% 00:28:33.427 cpu : usr=94.56%, sys=5.19%, ctx=16, majf=0, minf=212 00:28:33.427 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:33.427 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.427 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:33.427 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:33.427 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:33.427 00:28:33.427 Run status group 0 (all jobs): 00:28:33.427 READ: bw=377KiB/s (386kB/s), 377KiB/s-377KiB/s (386kB/s-386kB/s), io=3776KiB (3867kB), run=10020-10020msec 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@88 -- # destroy_subsystems 0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@43 -- # local sub 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@45 -- # for sub in "$@" 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@36 -- # local sub_id=0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 00:28:33.427 real 0m11.160s 00:28:33.427 user 0m15.920s 00:28:33.427 sys 0m0.775s 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_default -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 ************************************ 00:28:33.427 END TEST fio_dif_1_default 00:28:33.427 ************************************ 00:28:33.427 12:14:19 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:33.427 12:14:19 nvmf_dif -- target/dif.sh@142 -- # run_test fio_dif_1_multi_subsystems fio_dif_1_multi_subsystems 00:28:33.427 12:14:19 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:33.427 12:14:19 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 ************************************ 00:28:33.427 START TEST fio_dif_1_multi_subsystems 00:28:33.427 ************************************ 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1123 -- # fio_dif_1_multi_subsystems 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@92 -- # local files=1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@94 -- # create_subsystems 0 1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@28 -- # local sub 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 
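Editor's note: the multi-subsystem test starting here reuses the per-subsystem recipe already visible in fio_dif_1_default above, i.e. a null bdev carrying 16 bytes of metadata with DIF type 1, a subsystem, a namespace, and a TCP listener, all on a transport created with --dif-insert-or-strip. The harness drives these through its rpc_cmd wrapper; outside the harness the equivalent calls would normally go through scripts/rpc.py against the running target's RPC socket (the rpc.py path and default socket are assumptions, the RPC names and arguments are the ones shown in the log):

  scripts/rpc.py nvmf_create_transport -t tcp -o --dif-insert-or-strip
  scripts/rpc.py bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1
  scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
  scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
  scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420
  # subsystem 1 repeats the same steps with bdev_null1 / cnode1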
00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 bdev_null0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 [2024-07-25 12:14:19.191272] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@30 -- # for sub in "$@" 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@31 -- # create_subsystem 1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@18 -- # local sub_id=1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 bdev_null1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.427 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # fio /dev/fd/62 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@95 -- # create_json_sub_conf 0 1 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # config=() 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@532 -- # local subsystem config 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.428 { 00:28:33.428 "params": { 00:28:33.428 "name": "Nvme$subsystem", 00:28:33.428 "trtype": "$TEST_TRANSPORT", 00:28:33.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.428 "adrfam": "ipv4", 00:28:33.428 "trsvcid": "$NVMF_PORT", 00:28:33.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.428 "hdgst": ${hdgst:-false}, 00:28:33.428 "ddgst": ${ddgst:-false} 00:28:33.428 }, 00:28:33.428 "method": "bdev_nvme_attach_controller" 00:28:33.428 } 00:28:33.428 EOF 00:28:33.428 )") 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1341 -- # shift 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:33.428 12:14:19 
nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@82 -- # gen_fio_conf 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@54 -- # local file 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@56 -- # cat 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libasan 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file = 1 )) 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@73 -- # cat 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:33.428 { 00:28:33.428 "params": { 00:28:33.428 "name": "Nvme$subsystem", 00:28:33.428 "trtype": "$TEST_TRANSPORT", 00:28:33.428 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:33.428 "adrfam": "ipv4", 00:28:33.428 "trsvcid": "$NVMF_PORT", 00:28:33.428 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:33.428 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:33.428 "hdgst": ${hdgst:-false}, 00:28:33.428 "ddgst": ${ddgst:-false} 00:28:33.428 }, 00:28:33.428 "method": "bdev_nvme_attach_controller" 00:28:33.428 } 00:28:33.428 EOF 00:28:33.428 )") 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@554 -- # cat 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file++ )) 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@72 -- # (( file <= files )) 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@556 -- # jq . 
00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@557 -- # IFS=, 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:33.428 "params": { 00:28:33.428 "name": "Nvme0", 00:28:33.428 "trtype": "tcp", 00:28:33.428 "traddr": "10.0.0.2", 00:28:33.428 "adrfam": "ipv4", 00:28:33.428 "trsvcid": "4420", 00:28:33.428 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:33.428 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:33.428 "hdgst": false, 00:28:33.428 "ddgst": false 00:28:33.428 }, 00:28:33.428 "method": "bdev_nvme_attach_controller" 00:28:33.428 },{ 00:28:33.428 "params": { 00:28:33.428 "name": "Nvme1", 00:28:33.428 "trtype": "tcp", 00:28:33.428 "traddr": "10.0.0.2", 00:28:33.428 "adrfam": "ipv4", 00:28:33.428 "trsvcid": "4420", 00:28:33.428 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:33.428 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:33.428 "hdgst": false, 00:28:33.428 "ddgst": false 00:28:33.428 }, 00:28:33.428 "method": "bdev_nvme_attach_controller" 00:28:33.428 }' 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:33.428 12:14:19 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:33.428 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:33.428 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=4 00:28:33.428 fio-3.35 00:28:33.428 Starting 2 threads 00:28:33.428 EAL: No free 2048 kB hugepages reported on node 1 00:28:43.414 00:28:43.414 filename0: (groupid=0, jobs=1): err= 0: pid=502081: Thu Jul 25 12:14:30 2024 00:28:43.414 read: IOPS=180, BW=722KiB/s (739kB/s)(7232KiB/10021msec) 00:28:43.414 slat (nsec): min=4254, max=26272, avg=7065.83, stdev=1942.65 00:28:43.414 clat (usec): min=796, max=44545, avg=22148.97, stdev=20228.19 00:28:43.414 lat (usec): min=802, max=44559, avg=22156.04, stdev=20227.58 00:28:43.414 clat percentiles (usec): 00:28:43.414 | 1.00th=[ 816], 5.00th=[ 1713], 10.00th=[ 1827], 20.00th=[ 1844], 00:28:43.414 | 30.00th=[ 1876], 40.00th=[ 1942], 50.00th=[41157], 60.00th=[42206], 00:28:43.414 | 70.00th=[42206], 80.00th=[42206], 90.00th=[42730], 95.00th=[42730], 00:28:43.414 | 99.00th=[43254], 99.50th=[43254], 99.90th=[44303], 99.95th=[44303], 00:28:43.414 | 99.99th=[44303] 
00:28:43.414 bw ( KiB/s): min= 672, max= 768, per=65.64%, avg=721.60, stdev=31.96, samples=20 00:28:43.414 iops : min= 168, max= 192, avg=180.40, stdev= 7.99, samples=20 00:28:43.414 lat (usec) : 1000=1.99% 00:28:43.414 lat (msec) : 2=40.04%, 4=7.74%, 50=50.22% 00:28:43.414 cpu : usr=97.17%, sys=2.58%, ctx=14, majf=0, minf=152 00:28:43.414 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.414 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.414 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.414 issued rwts: total=1808,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.414 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:43.414 filename1: (groupid=0, jobs=1): err= 0: pid=502082: Thu Jul 25 12:14:30 2024 00:28:43.414 read: IOPS=94, BW=377KiB/s (386kB/s)(3776KiB/10010msec) 00:28:43.414 slat (nsec): min=5969, max=25709, avg=7674.22, stdev=2459.54 00:28:43.414 clat (usec): min=41320, max=44103, avg=42392.43, stdev=523.23 00:28:43.414 lat (usec): min=41326, max=44128, avg=42400.10, stdev=523.34 00:28:43.414 clat percentiles (usec): 00:28:43.414 | 1.00th=[41681], 5.00th=[41681], 10.00th=[41681], 20.00th=[42206], 00:28:43.414 | 30.00th=[42206], 40.00th=[42206], 50.00th=[42206], 60.00th=[42730], 00:28:43.414 | 70.00th=[42730], 80.00th=[42730], 90.00th=[43254], 95.00th=[43254], 00:28:43.415 | 99.00th=[43779], 99.50th=[43779], 99.90th=[44303], 99.95th=[44303], 00:28:43.415 | 99.99th=[44303] 00:28:43.415 bw ( KiB/s): min= 352, max= 384, per=34.23%, avg=376.00, stdev=14.22, samples=20 00:28:43.415 iops : min= 88, max= 96, avg=94.00, stdev= 3.55, samples=20 00:28:43.415 lat (msec) : 50=100.00% 00:28:43.415 cpu : usr=97.63%, sys=2.12%, ctx=12, majf=0, minf=101 00:28:43.415 IO depths : 1=25.0%, 2=50.0%, 4=25.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:43.415 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.415 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:43.415 issued rwts: total=944,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:43.415 latency : target=0, window=0, percentile=100.00%, depth=4 00:28:43.415 00:28:43.415 Run status group 0 (all jobs): 00:28:43.415 READ: bw=1098KiB/s (1125kB/s), 377KiB/s-722KiB/s (386kB/s-739kB/s), io=10.8MiB (11.3MB), run=10010-10021msec 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@96 -- # destroy_subsystems 0 1 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@43 -- # local sub 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=0 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # 
xtrace_disable 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@45 -- # for sub in "$@" 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@46 -- # destroy_subsystem 1 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@36 -- # local sub_id=1 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.415 00:28:43.415 real 0m11.491s 00:28:43.415 user 0m26.353s 00:28:43.415 sys 0m0.757s 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@1124 -- # xtrace_disable 00:28:43.415 12:14:30 nvmf_dif.fio_dif_1_multi_subsystems -- common/autotest_common.sh@10 -- # set +x 00:28:43.415 ************************************ 00:28:43.415 END TEST fio_dif_1_multi_subsystems 00:28:43.415 ************************************ 00:28:43.675 12:14:30 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:28:43.675 12:14:30 nvmf_dif -- target/dif.sh@143 -- # run_test fio_dif_rand_params fio_dif_rand_params 00:28:43.675 12:14:30 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:28:43.675 12:14:30 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:28:43.675 12:14:30 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:28:43.675 ************************************ 00:28:43.675 START TEST fio_dif_rand_params 00:28:43.675 ************************************ 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1123 -- # fio_dif_rand_params 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@100 -- # local NULL_DIF 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@101 -- # local bs numjobs runtime iodepth files 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # NULL_DIF=3 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # bs=128k 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # numjobs=3 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # iodepth=3 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@103 -- # runtime=5 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@105 -- # create_subsystems 0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- 
target/dif.sh@31 -- # create_subsystem 0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.675 bdev_null0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:43.675 [2024-07-25 12:14:30.750439] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # fio /dev/fd/62 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@106 -- # create_json_sub_conf 0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:43.675 12:14:30 
nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:43.675 { 00:28:43.675 "params": { 00:28:43.675 "name": "Nvme$subsystem", 00:28:43.675 "trtype": "$TEST_TRANSPORT", 00:28:43.675 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:43.675 "adrfam": "ipv4", 00:28:43.675 "trsvcid": "$NVMF_PORT", 00:28:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:43.675 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:43.675 "hdgst": ${hdgst:-false}, 00:28:43.675 "ddgst": ${ddgst:-false} 00:28:43.675 }, 00:28:43.675 "method": "bdev_nvme_attach_controller" 00:28:43.675 } 00:28:43.675 EOF 00:28:43.675 )") 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 
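Editor's note: as in the earlier runs, the JSON fragment being assembled here is handed to fio's SPDK bdev plugin on /dev/fd/62 while the generated job file arrives on /dev/fd/61. Reproduced outside the harness with ordinary files in place of the fd redirections (bdev.json and job.fio are placeholder names; bdev.json would hold the bdev_nvme_attach_controller configuration printed by gen_nvmf_target_json above), the invocation in the log reduces to:

  LD_PRELOAD=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev \
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf bdev.json job.fio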
00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:43.675 "params": { 00:28:43.675 "name": "Nvme0", 00:28:43.675 "trtype": "tcp", 00:28:43.675 "traddr": "10.0.0.2", 00:28:43.675 "adrfam": "ipv4", 00:28:43.675 "trsvcid": "4420", 00:28:43.675 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:43.675 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:43.675 "hdgst": false, 00:28:43.675 "ddgst": false 00:28:43.675 }, 00:28:43.675 "method": "bdev_nvme_attach_controller" 00:28:43.675 }' 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:43.675 12:14:30 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:43.934 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:28:43.935 ... 
00:28:43.935 fio-3.35 00:28:43.935 Starting 3 threads 00:28:43.935 EAL: No free 2048 kB hugepages reported on node 1 00:28:50.502 00:28:50.502 filename0: (groupid=0, jobs=1): err= 0: pid=503954: Thu Jul 25 12:14:36 2024 00:28:50.502 read: IOPS=241, BW=30.1MiB/s (31.6MB/s)(151MiB/5008msec) 00:28:50.502 slat (nsec): min=6165, max=34876, avg=9772.40, stdev=3867.64 00:28:50.502 clat (usec): min=5338, max=92603, avg=12431.65, stdev=10548.35 00:28:50.502 lat (usec): min=5346, max=92611, avg=12441.43, stdev=10548.48 00:28:50.502 clat percentiles (usec): 00:28:50.502 | 1.00th=[ 5538], 5.00th=[ 6783], 10.00th=[ 7832], 20.00th=[ 8717], 00:28:50.502 | 30.00th=[ 9241], 40.00th=[ 9765], 50.00th=[10028], 60.00th=[10421], 00:28:50.502 | 70.00th=[10814], 80.00th=[11338], 90.00th=[12387], 95.00th=[49546], 00:28:50.502 | 99.00th=[53740], 99.50th=[55313], 99.90th=[92799], 99.95th=[92799], 00:28:50.502 | 99.99th=[92799] 00:28:50.502 bw ( KiB/s): min=23808, max=42240, per=41.13%, avg=30822.40, stdev=5968.70, samples=10 00:28:50.502 iops : min= 186, max= 330, avg=240.80, stdev=46.63, samples=10 00:28:50.502 lat (msec) : 10=47.80%, 20=46.15%, 50=1.24%, 100=4.81% 00:28:50.502 cpu : usr=95.69%, sys=3.26%, ctx=380, majf=0, minf=129 00:28:50.502 IO depths : 1=0.5%, 2=99.5%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 issued rwts: total=1207,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.502 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.502 filename0: (groupid=0, jobs=1): err= 0: pid=503955: Thu Jul 25 12:14:36 2024 00:28:50.502 read: IOPS=163, BW=20.5MiB/s (21.4MB/s)(102MiB/5005msec) 00:28:50.502 slat (usec): min=6, max=109, avg= 9.83, stdev= 5.89 00:28:50.502 clat (usec): min=5861, max=60940, avg=18317.26, stdev=17617.10 00:28:50.502 lat (usec): min=5868, max=60952, avg=18327.08, stdev=17617.31 00:28:50.502 clat percentiles (usec): 00:28:50.502 | 1.00th=[ 6259], 5.00th=[ 6652], 10.00th=[ 6980], 20.00th=[ 7570], 00:28:50.502 | 30.00th=[ 8225], 40.00th=[ 9241], 50.00th=[ 9896], 60.00th=[11469], 00:28:50.502 | 70.00th=[13960], 80.00th=[17957], 90.00th=[54264], 95.00th=[56886], 00:28:50.502 | 99.00th=[60031], 99.50th=[60031], 99.90th=[61080], 99.95th=[61080], 00:28:50.502 | 99.99th=[61080] 00:28:50.502 bw ( KiB/s): min=11520, max=29952, per=27.88%, avg=20893.90, stdev=5745.43, samples=10 00:28:50.502 iops : min= 90, max= 234, avg=163.20, stdev=44.88, samples=10 00:28:50.502 lat (msec) : 10=50.31%, 20=30.40%, 50=2.81%, 100=16.48% 00:28:50.502 cpu : usr=95.70%, sys=3.36%, ctx=16, majf=0, minf=134 00:28:50.502 IO depths : 1=3.7%, 2=96.3%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 issued rwts: total=819,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.502 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.502 filename0: (groupid=0, jobs=1): err= 0: pid=503956: Thu Jul 25 12:14:36 2024 00:28:50.502 read: IOPS=181, BW=22.6MiB/s (23.7MB/s)(113MiB/5002msec) 00:28:50.502 slat (nsec): min=6236, max=31007, avg=9287.82, stdev=3731.95 00:28:50.502 clat (usec): min=5537, max=61064, avg=16547.11, stdev=16217.30 00:28:50.502 lat (usec): min=5543, max=61071, avg=16556.40, stdev=16217.44 00:28:50.502 clat percentiles (usec): 00:28:50.502 | 
1.00th=[ 6128], 5.00th=[ 6587], 10.00th=[ 6980], 20.00th=[ 7635], 00:28:50.502 | 30.00th=[ 8029], 40.00th=[ 8717], 50.00th=[ 9634], 60.00th=[10552], 00:28:50.502 | 70.00th=[12649], 80.00th=[16581], 90.00th=[51643], 95.00th=[56886], 00:28:50.502 | 99.00th=[60031], 99.50th=[61080], 99.90th=[61080], 99.95th=[61080], 00:28:50.502 | 99.99th=[61080] 00:28:50.502 bw ( KiB/s): min=15616, max=34560, per=31.09%, avg=23296.00, stdev=6873.97, samples=9 00:28:50.502 iops : min= 122, max= 270, avg=182.00, stdev=53.70, samples=9 00:28:50.502 lat (msec) : 10=54.42%, 20=30.02%, 50=2.76%, 100=12.80% 00:28:50.502 cpu : usr=95.94%, sys=3.18%, ctx=7, majf=0, minf=98 00:28:50.502 IO depths : 1=4.3%, 2=95.7%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:28:50.502 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:28:50.502 issued rwts: total=906,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:28:50.502 latency : target=0, window=0, percentile=100.00%, depth=3 00:28:50.502 00:28:50.502 Run status group 0 (all jobs): 00:28:50.502 READ: bw=73.2MiB/s (76.7MB/s), 20.5MiB/s-30.1MiB/s (21.4MB/s-31.6MB/s), io=367MiB (384MB), run=5002-5008msec 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@107 -- # destroy_subsystems 0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # NULL_DIF=2 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # bs=4k 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # numjobs=8 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # iodepth=16 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # runtime= 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@109 -- # files=2 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@111 -- # create_subsystems 0 1 2 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:28:50.502 12:14:36 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.502 bdev_null0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.502 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 [2024-07-25 12:14:36.966236] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 2 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 bdev_null1 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 
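The create_subsystem step traced here reduces to four RPCs per subsystem: create a null bdev carrying 16 bytes of metadata with DIF type 2, create the NVMe-oF subsystem, attach the bdev as its namespace, and open an NVMe/TCP listener (the tcp.c listen notice in the trace confirms 10.0.0.2:4420). Issued by hand, the same sequence for subsystem 0 would be roughly the following; the scripts/rpc.py path is inferred from this workspace layout, and it assumes the nvmf target is already running with a TCP transport created.

# Stand-alone equivalent of the rpc_cmd calls in the trace, subsystem 0 only.
RPC=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py

# 64 MB null bdev, 512-byte blocks, 16-byte metadata, protection information type 2
$RPC bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 2

# Subsystem, namespace and NVMe/TCP listener on 10.0.0.2:4420
$RPC nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host
$RPC nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0
$RPC nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420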
00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:36 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null2 64 512 --md-size 16 --dif-type 2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 bdev_null2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode2 --serial-number 53313233-2 --allow-any-host 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode2 bdev_null2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode2 -t tcp -a 10.0.0.2 -s 4420 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # fio /dev/fd/62 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:28:50.503 12:14:37 
nvmf_dif.fio_dif_rand_params -- target/dif.sh@112 -- # create_json_sub_conf 0 1 2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 2 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:50.503 { 00:28:50.503 "params": { 00:28:50.503 "name": "Nvme$subsystem", 00:28:50.503 "trtype": "$TEST_TRANSPORT", 00:28:50.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.503 "adrfam": "ipv4", 00:28:50.503 "trsvcid": "$NVMF_PORT", 00:28:50.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.503 "hdgst": ${hdgst:-false}, 00:28:50.503 "ddgst": ${ddgst:-false} 00:28:50.503 }, 00:28:50.503 "method": "bdev_nvme_attach_controller" 00:28:50.503 } 00:28:50.503 EOF 00:28:50.503 )") 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:50.503 { 00:28:50.503 "params": { 00:28:50.503 "name": "Nvme$subsystem", 00:28:50.503 "trtype": "$TEST_TRANSPORT", 00:28:50.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.503 "adrfam": "ipv4", 00:28:50.503 "trsvcid": "$NVMF_PORT", 00:28:50.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.503 "hdgst": ${hdgst:-false}, 00:28:50.503 "ddgst": ${ddgst:-false} 00:28:50.503 }, 00:28:50.503 "method": "bdev_nvme_attach_controller" 00:28:50.503 } 00:28:50.503 EOF 00:28:50.503 )") 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file 
<= files )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:28:50.503 { 00:28:50.503 "params": { 00:28:50.503 "name": "Nvme$subsystem", 00:28:50.503 "trtype": "$TEST_TRANSPORT", 00:28:50.503 "traddr": "$NVMF_FIRST_TARGET_IP", 00:28:50.503 "adrfam": "ipv4", 00:28:50.503 "trsvcid": "$NVMF_PORT", 00:28:50.503 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:28:50.503 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:28:50.503 "hdgst": ${hdgst:-false}, 00:28:50.503 "ddgst": ${ddgst:-false} 00:28:50.503 }, 00:28:50.503 "method": "bdev_nvme_attach_controller" 00:28:50.503 } 00:28:50.503 EOF 00:28:50.503 )") 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:28:50.503 12:14:37 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:28:50.503 "params": { 00:28:50.503 "name": "Nvme0", 00:28:50.503 "trtype": "tcp", 00:28:50.503 "traddr": "10.0.0.2", 00:28:50.503 "adrfam": "ipv4", 00:28:50.503 "trsvcid": "4420", 00:28:50.503 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:28:50.503 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:28:50.503 "hdgst": false, 00:28:50.503 "ddgst": false 00:28:50.503 }, 00:28:50.503 "method": "bdev_nvme_attach_controller" 00:28:50.503 },{ 00:28:50.503 "params": { 00:28:50.503 "name": "Nvme1", 00:28:50.503 "trtype": "tcp", 00:28:50.503 "traddr": "10.0.0.2", 00:28:50.503 "adrfam": "ipv4", 00:28:50.503 "trsvcid": "4420", 00:28:50.503 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:28:50.503 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:28:50.503 "hdgst": false, 00:28:50.503 "ddgst": false 00:28:50.503 }, 00:28:50.504 "method": "bdev_nvme_attach_controller" 00:28:50.504 },{ 00:28:50.504 "params": { 00:28:50.504 "name": "Nvme2", 00:28:50.504 "trtype": "tcp", 00:28:50.504 "traddr": "10.0.0.2", 00:28:50.504 "adrfam": "ipv4", 00:28:50.504 "trsvcid": "4420", 00:28:50.504 "subnqn": "nqn.2016-06.io.spdk:cnode2", 00:28:50.504 "hostnqn": "nqn.2016-06.io.spdk:host2", 00:28:50.504 "hdgst": false, 00:28:50.504 "ddgst": false 00:28:50.504 }, 00:28:50.504 "method": "bdev_nvme_attach_controller" 00:28:50.504 }' 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@1345 -- # asan_lib= 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:28:50.504 12:14:37 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:28:50.504 filename0: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.504 ... 00:28:50.504 filename1: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.504 ... 00:28:50.504 filename2: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=16 00:28:50.504 ... 00:28:50.504 fio-3.35 00:28:50.504 Starting 24 threads 00:28:50.504 EAL: No free 2048 kB hugepages reported on node 1 00:29:02.784 00:29:02.784 filename0: (groupid=0, jobs=1): err= 0: pid=505212: Thu Jul 25 12:14:48 2024 00:29:02.784 read: IOPS=578, BW=2314KiB/s (2370kB/s)(22.6MiB/10017msec) 00:29:02.784 slat (nsec): min=6923, max=40316, avg=14370.42, stdev=5041.36 00:29:02.784 clat (usec): min=12087, max=51078, avg=27570.14, stdev=5211.15 00:29:02.784 lat (usec): min=12096, max=51096, avg=27584.51, stdev=5211.15 00:29:02.784 clat percentiles (usec): 00:29:02.784 | 1.00th=[15270], 5.00th=[20841], 10.00th=[23462], 20.00th=[24249], 00:29:02.784 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:02.784 | 70.00th=[30278], 80.00th=[32375], 90.00th=[34341], 95.00th=[36439], 00:29:02.784 | 99.00th=[43779], 99.50th=[47449], 99.90th=[50070], 99.95th=[51119], 00:29:02.784 | 99.99th=[51119] 00:29:02.784 bw ( KiB/s): min= 2096, max= 2600, per=4.25%, avg=2311.60, stdev=118.81, samples=20 00:29:02.784 iops : min= 524, max= 650, avg=577.90, stdev=29.70, samples=20 00:29:02.784 lat (msec) : 20=4.78%, 50=95.15%, 100=0.07% 00:29:02.784 cpu : usr=98.36%, sys=1.23%, ctx=16, majf=0, minf=75 00:29:02.784 IO depths : 1=0.7%, 2=1.4%, 4=7.9%, 8=77.3%, 16=12.7%, 32=0.0%, >=64=0.0% 00:29:02.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 complete : 0=0.0%, 4=90.2%, 8=4.9%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 issued rwts: total=5795,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.784 filename0: (groupid=0, jobs=1): err= 0: pid=505213: Thu Jul 25 12:14:48 2024 00:29:02.784 read: IOPS=590, BW=2362KiB/s (2418kB/s)(23.1MiB/10007msec) 00:29:02.784 slat (nsec): min=6260, max=36385, avg=13831.99, stdev=4980.97 00:29:02.784 clat (usec): min=8680, max=49254, avg=27021.37, stdev=4884.80 00:29:02.784 lat (usec): min=8688, max=49264, avg=27035.20, stdev=4884.93 00:29:02.784 clat percentiles (usec): 00:29:02.784 | 1.00th=[14353], 5.00th=[20055], 10.00th=[23462], 20.00th=[24249], 00:29:02.784 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:02.784 | 70.00th=[27132], 80.00th=[31589], 90.00th=[33817], 95.00th=[35914], 00:29:02.784 | 99.00th=[41157], 99.50th=[42206], 99.90th=[47449], 99.95th=[49021], 00:29:02.784 | 99.99th=[49021] 00:29:02.784 bw ( KiB/s): min= 2176, max= 2480, per=4.33%, avg=2359.58, stdev=75.87, samples=19 00:29:02.784 iops : min= 544, max= 620, avg=589.89, stdev=18.97, samples=19 00:29:02.784 lat (msec) : 10=0.12%, 20=4.96%, 
50=94.92% 00:29:02.784 cpu : usr=98.39%, sys=1.19%, ctx=14, majf=0, minf=53 00:29:02.784 IO depths : 1=0.6%, 2=1.2%, 4=7.9%, 8=77.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:29:02.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 complete : 0=0.0%, 4=89.7%, 8=5.2%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 issued rwts: total=5908,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.784 filename0: (groupid=0, jobs=1): err= 0: pid=505214: Thu Jul 25 12:14:48 2024 00:29:02.784 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.1MiB/10005msec) 00:29:02.784 slat (nsec): min=6839, max=66938, avg=10316.62, stdev=5593.53 00:29:02.784 clat (usec): min=5978, max=58238, avg=29529.21, stdev=5789.68 00:29:02.784 lat (usec): min=5986, max=58285, avg=29539.53, stdev=5790.22 00:29:02.784 clat percentiles (usec): 00:29:02.784 | 1.00th=[17433], 5.00th=[21103], 10.00th=[23462], 20.00th=[24773], 00:29:02.784 | 30.00th=[25822], 40.00th=[27132], 50.00th=[29492], 60.00th=[31065], 00:29:02.784 | 70.00th=[32375], 80.00th=[33817], 90.00th=[36963], 95.00th=[39584], 00:29:02.784 | 99.00th=[45351], 99.50th=[46400], 99.90th=[51643], 99.95th=[57934], 00:29:02.784 | 99.99th=[58459] 00:29:02.784 bw ( KiB/s): min= 1968, max= 2464, per=3.97%, avg=2160.60, stdev=130.99, samples=20 00:29:02.784 iops : min= 492, max= 616, avg=540.15, stdev=32.75, samples=20 00:29:02.784 lat (msec) : 10=0.30%, 20=2.92%, 50=96.56%, 100=0.22% 00:29:02.784 cpu : usr=98.32%, sys=1.26%, ctx=13, majf=0, minf=89 00:29:02.784 IO depths : 1=0.1%, 2=0.5%, 4=6.1%, 8=78.2%, 16=15.1%, 32=0.0%, >=64=0.0% 00:29:02.784 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 complete : 0=0.0%, 4=90.2%, 8=6.5%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.784 issued rwts: total=5411,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.784 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.784 filename0: (groupid=0, jobs=1): err= 0: pid=505215: Thu Jul 25 12:14:48 2024 00:29:02.784 read: IOPS=527, BW=2108KiB/s (2159kB/s)(20.6MiB/10015msec) 00:29:02.784 slat (nsec): min=7026, max=84899, avg=12532.09, stdev=4557.36 00:29:02.784 clat (usec): min=12994, max=51245, avg=30279.33, stdev=5494.70 00:29:02.784 lat (usec): min=13002, max=51262, avg=30291.86, stdev=5494.36 00:29:02.784 clat percentiles (usec): 00:29:02.784 | 1.00th=[17695], 5.00th=[23462], 10.00th=[24249], 20.00th=[25297], 00:29:02.785 | 30.00th=[25822], 40.00th=[28181], 50.00th=[31327], 60.00th=[32113], 00:29:02.785 | 70.00th=[33162], 80.00th=[34341], 90.00th=[36963], 95.00th=[39060], 00:29:02.785 | 99.00th=[45351], 99.50th=[47449], 99.90th=[50594], 99.95th=[51119], 00:29:02.785 | 99.99th=[51119] 00:29:02.785 bw ( KiB/s): min= 1888, max= 2352, per=3.86%, avg=2104.80, stdev=119.08, samples=20 00:29:02.785 iops : min= 472, max= 588, avg=526.20, stdev=29.77, samples=20 00:29:02.785 lat (msec) : 20=2.22%, 50=97.59%, 100=0.19% 00:29:02.785 cpu : usr=98.48%, sys=1.12%, ctx=16, majf=0, minf=53 00:29:02.785 IO depths : 1=1.0%, 2=2.1%, 4=11.7%, 8=72.8%, 16=12.4%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=91.2%, 8=3.8%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5278,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename0: (groupid=0, jobs=1): err= 0: pid=505216: Thu Jul 25 12:14:48 2024 
00:29:02.785 read: IOPS=587, BW=2351KiB/s (2408kB/s)(23.0MiB/10012msec) 00:29:02.785 slat (nsec): min=6870, max=39091, avg=12877.99, stdev=4711.39 00:29:02.785 clat (usec): min=12404, max=49583, avg=27125.44, stdev=4555.41 00:29:02.785 lat (usec): min=12412, max=49593, avg=27138.32, stdev=4555.20 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[16712], 5.00th=[22152], 10.00th=[23462], 20.00th=[24249], 00:29:02.785 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25822], 60.00th=[26346], 00:29:02.785 | 70.00th=[27657], 80.00th=[31065], 90.00th=[33817], 95.00th=[35914], 00:29:02.785 | 99.00th=[41157], 99.50th=[44827], 99.90th=[46400], 99.95th=[46924], 00:29:02.785 | 99.99th=[49546] 00:29:02.785 bw ( KiB/s): min= 2200, max= 2488, per=4.32%, avg=2353.20, stdev=91.33, samples=20 00:29:02.785 iops : min= 550, max= 622, avg=588.30, stdev=22.83, samples=20 00:29:02.785 lat (msec) : 20=2.79%, 50=97.21% 00:29:02.785 cpu : usr=98.14%, sys=1.36%, ctx=12, majf=0, minf=54 00:29:02.785 IO depths : 1=0.3%, 2=0.9%, 4=7.5%, 8=78.0%, 16=13.2%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=89.9%, 8=5.4%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5885,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename0: (groupid=0, jobs=1): err= 0: pid=505217: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=607, BW=2429KiB/s (2487kB/s)(23.8MiB/10014msec) 00:29:02.785 slat (nsec): min=6946, max=39196, avg=12910.92, stdev=4559.32 00:29:02.785 clat (usec): min=10078, max=48641, avg=26267.88, stdev=3863.76 00:29:02.785 lat (usec): min=10092, max=48654, avg=26280.79, stdev=3863.74 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[16909], 5.00th=[22676], 10.00th=[23725], 20.00th=[24249], 00:29:02.785 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25297], 60.00th=[25822], 00:29:02.785 | 70.00th=[26084], 80.00th=[27395], 90.00th=[31851], 95.00th=[34341], 00:29:02.785 | 99.00th=[39060], 99.50th=[42730], 99.90th=[45351], 99.95th=[48497], 00:29:02.785 | 99.99th=[48497] 00:29:02.785 bw ( KiB/s): min= 2256, max= 2640, per=4.46%, avg=2428.40, stdev=100.10, samples=20 00:29:02.785 iops : min= 564, max= 660, avg=607.10, stdev=25.03, samples=20 00:29:02.785 lat (msec) : 20=2.71%, 50=97.29% 00:29:02.785 cpu : usr=98.27%, sys=1.30%, ctx=15, majf=0, minf=70 00:29:02.785 IO depths : 1=0.3%, 2=0.7%, 4=6.7%, 8=78.7%, 16=13.6%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=89.8%, 8=5.7%, 16=4.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=6081,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename0: (groupid=0, jobs=1): err= 0: pid=505218: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=567, BW=2270KiB/s (2325kB/s)(22.2MiB/10026msec) 00:29:02.785 slat (nsec): min=6909, max=38130, avg=11935.04, stdev=4376.52 00:29:02.785 clat (usec): min=8708, max=52099, avg=28090.82, stdev=5738.67 00:29:02.785 lat (usec): min=8726, max=52110, avg=28102.75, stdev=5738.84 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[14353], 5.00th=[19792], 10.00th=[23200], 20.00th=[24249], 00:29:02.785 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[28705], 00:29:02.785 | 70.00th=[31327], 80.00th=[32637], 90.00th=[35390], 95.00th=[37487], 00:29:02.785 | 
99.00th=[45876], 99.50th=[50070], 99.90th=[52167], 99.95th=[52167], 00:29:02.785 | 99.99th=[52167] 00:29:02.785 bw ( KiB/s): min= 1872, max= 2512, per=4.17%, avg=2269.60, stdev=167.37, samples=20 00:29:02.785 iops : min= 468, max= 628, avg=567.40, stdev=41.84, samples=20 00:29:02.785 lat (msec) : 10=0.12%, 20=4.94%, 50=94.43%, 100=0.51% 00:29:02.785 cpu : usr=98.37%, sys=1.17%, ctx=16, majf=0, minf=78 00:29:02.785 IO depths : 1=0.8%, 2=1.7%, 4=10.4%, 8=74.6%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=91.0%, 8=4.0%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5690,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename0: (groupid=0, jobs=1): err= 0: pid=505219: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=562, BW=2251KiB/s (2305kB/s)(22.0MiB/10006msec) 00:29:02.785 slat (nsec): min=6966, max=60775, avg=13605.20, stdev=4941.21 00:29:02.785 clat (usec): min=8856, max=52153, avg=28350.82, stdev=5643.31 00:29:02.785 lat (usec): min=8863, max=52168, avg=28364.43, stdev=5643.30 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[16057], 5.00th=[21627], 10.00th=[23725], 20.00th=[24511], 00:29:02.785 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[28967], 00:29:02.785 | 70.00th=[31589], 80.00th=[33162], 90.00th=[35390], 95.00th=[37487], 00:29:02.785 | 99.00th=[46400], 99.50th=[49021], 99.90th=[51643], 99.95th=[52167], 00:29:02.785 | 99.99th=[52167] 00:29:02.785 bw ( KiB/s): min= 2064, max= 2432, per=4.11%, avg=2240.84, stdev=99.38, samples=19 00:29:02.785 iops : min= 516, max= 608, avg=560.21, stdev=24.85, samples=19 00:29:02.785 lat (msec) : 10=0.23%, 20=3.69%, 50=95.69%, 100=0.39% 00:29:02.785 cpu : usr=98.27%, sys=1.31%, ctx=14, majf=0, minf=72 00:29:02.785 IO depths : 1=0.4%, 2=1.0%, 4=8.0%, 8=77.7%, 16=13.0%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5632,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename1: (groupid=0, jobs=1): err= 0: pid=505220: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=562, BW=2252KiB/s (2306kB/s)(22.0MiB/10018msec) 00:29:02.785 slat (nsec): min=6838, max=36695, avg=10645.50, stdev=3847.86 00:29:02.785 clat (usec): min=12362, max=45893, avg=28358.62, stdev=4937.95 00:29:02.785 lat (usec): min=12370, max=45903, avg=28369.26, stdev=4938.00 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[17695], 5.00th=[21627], 10.00th=[23462], 20.00th=[24511], 00:29:02.785 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26608], 60.00th=[28967], 00:29:02.785 | 70.00th=[31589], 80.00th=[32900], 90.00th=[34866], 95.00th=[36963], 00:29:02.785 | 99.00th=[42206], 99.50th=[43779], 99.90th=[44827], 99.95th=[45876], 00:29:02.785 | 99.99th=[45876] 00:29:02.785 bw ( KiB/s): min= 2096, max= 2408, per=4.13%, avg=2249.20, stdev=83.80, samples=20 00:29:02.785 iops : min= 524, max= 602, avg=562.30, stdev=20.95, samples=20 00:29:02.785 lat (msec) : 20=2.71%, 50=97.29% 00:29:02.785 cpu : usr=98.49%, sys=1.07%, ctx=14, majf=0, minf=94 00:29:02.785 IO depths : 1=0.1%, 2=0.7%, 4=7.3%, 8=76.9%, 16=14.9%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 
00:29:02.785 complete : 0=0.0%, 4=90.5%, 8=6.0%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5639,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename1: (groupid=0, jobs=1): err= 0: pid=505221: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=504, BW=2019KiB/s (2068kB/s)(19.7MiB/10003msec) 00:29:02.785 slat (nsec): min=10254, max=81393, avg=48858.88, stdev=18090.40 00:29:02.785 clat (usec): min=7082, max=57499, avg=31448.23, stdev=5974.18 00:29:02.785 lat (usec): min=7119, max=57553, avg=31497.09, stdev=5974.76 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[17433], 5.00th=[23725], 10.00th=[24511], 20.00th=[25560], 00:29:02.785 | 30.00th=[29230], 40.00th=[30802], 50.00th=[31589], 60.00th=[32637], 00:29:02.785 | 70.00th=[33424], 80.00th=[34866], 90.00th=[37487], 95.00th=[41681], 00:29:02.785 | 99.00th=[50070], 99.50th=[51643], 99.90th=[53740], 99.95th=[56886], 00:29:02.785 | 99.99th=[57410] 00:29:02.785 bw ( KiB/s): min= 1856, max= 2176, per=3.67%, avg=1999.16, stdev=99.52, samples=19 00:29:02.785 iops : min= 464, max= 544, avg=499.79, stdev=24.88, samples=19 00:29:02.785 lat (msec) : 10=0.20%, 20=1.50%, 50=96.85%, 100=1.45% 00:29:02.785 cpu : usr=98.74%, sys=0.83%, ctx=12, majf=0, minf=48 00:29:02.785 IO depths : 1=0.2%, 2=0.5%, 4=7.7%, 8=78.0%, 16=13.5%, 32=0.0%, >=64=0.0% 00:29:02.785 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 complete : 0=0.0%, 4=89.9%, 8=5.3%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.785 issued rwts: total=5050,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.785 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.785 filename1: (groupid=0, jobs=1): err= 0: pid=505222: Thu Jul 25 12:14:48 2024 00:29:02.785 read: IOPS=595, BW=2381KiB/s (2438kB/s)(23.3MiB/10012msec) 00:29:02.785 slat (nsec): min=6931, max=40170, avg=12822.30, stdev=4624.09 00:29:02.785 clat (usec): min=12424, max=45586, avg=26801.00, stdev=4741.44 00:29:02.785 lat (usec): min=12440, max=45602, avg=26813.82, stdev=4742.03 00:29:02.785 clat percentiles (usec): 00:29:02.785 | 1.00th=[14746], 5.00th=[19792], 10.00th=[22938], 20.00th=[23987], 00:29:02.785 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:02.785 | 70.00th=[26870], 80.00th=[31065], 90.00th=[33817], 95.00th=[35390], 00:29:02.785 | 99.00th=[39060], 99.50th=[41157], 99.90th=[44827], 99.95th=[45351], 00:29:02.785 | 99.99th=[45351] 00:29:02.785 bw ( KiB/s): min= 2224, max= 2512, per=4.37%, avg=2377.20, stdev=72.18, samples=20 00:29:02.786 iops : min= 556, max= 628, avg=594.30, stdev=18.04, samples=20 00:29:02.786 lat (msec) : 20=5.39%, 50=94.61% 00:29:02.786 cpu : usr=98.49%, sys=1.10%, ctx=18, majf=0, minf=63 00:29:02.786 IO depths : 1=0.8%, 2=1.6%, 4=8.8%, 8=76.5%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.0%, 8=4.9%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5959,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename1: (groupid=0, jobs=1): err= 0: pid=505223: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=571, BW=2288KiB/s (2343kB/s)(22.4MiB/10015msec) 00:29:02.786 slat (nsec): min=6946, max=35829, avg=12779.66, stdev=4357.80 00:29:02.786 clat (usec): min=10410, max=49352, avg=27879.48, stdev=5250.24 00:29:02.786 lat (usec): min=10425, 
max=49367, avg=27892.26, stdev=5249.93 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[14877], 5.00th=[20317], 10.00th=[23200], 20.00th=[24249], 00:29:02.786 | 30.00th=[24773], 40.00th=[25560], 50.00th=[26084], 60.00th=[27395], 00:29:02.786 | 70.00th=[31065], 80.00th=[32637], 90.00th=[35390], 95.00th=[36439], 00:29:02.786 | 99.00th=[41681], 99.50th=[42206], 99.90th=[45876], 99.95th=[49546], 00:29:02.786 | 99.99th=[49546] 00:29:02.786 bw ( KiB/s): min= 1968, max= 2432, per=4.20%, avg=2288.40, stdev=124.22, samples=20 00:29:02.786 iops : min= 492, max= 608, avg=572.10, stdev=31.05, samples=20 00:29:02.786 lat (msec) : 20=4.63%, 50=95.37% 00:29:02.786 cpu : usr=98.42%, sys=1.17%, ctx=19, majf=0, minf=82 00:29:02.786 IO depths : 1=1.0%, 2=2.0%, 4=9.1%, 8=75.6%, 16=12.3%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.4%, 8=4.6%, 16=5.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5728,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename1: (groupid=0, jobs=1): err= 0: pid=505224: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=540, BW=2163KiB/s (2215kB/s)(21.2MiB/10021msec) 00:29:02.786 slat (nsec): min=6911, max=47270, avg=13224.52, stdev=4737.43 00:29:02.786 clat (usec): min=13706, max=51483, avg=29509.93, stdev=5627.28 00:29:02.786 lat (usec): min=13719, max=51492, avg=29523.16, stdev=5627.12 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[16909], 5.00th=[23200], 10.00th=[23987], 20.00th=[25035], 00:29:02.786 | 30.00th=[25560], 40.00th=[26084], 50.00th=[29754], 60.00th=[31327], 00:29:02.786 | 70.00th=[32637], 80.00th=[33817], 90.00th=[36439], 95.00th=[38536], 00:29:02.786 | 99.00th=[47973], 99.50th=[49021], 99.90th=[51119], 99.95th=[51643], 00:29:02.786 | 99.99th=[51643] 00:29:02.786 bw ( KiB/s): min= 2048, max= 2360, per=3.97%, avg=2160.80, stdev=84.42, samples=20 00:29:02.786 iops : min= 512, max= 590, avg=540.20, stdev=21.11, samples=20 00:29:02.786 lat (msec) : 20=2.84%, 50=96.81%, 100=0.35% 00:29:02.786 cpu : usr=98.39%, sys=1.21%, ctx=21, majf=0, minf=60 00:29:02.786 IO depths : 1=0.5%, 2=1.3%, 4=9.5%, 8=75.7%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.6%, 8=4.5%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5418,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename1: (groupid=0, jobs=1): err= 0: pid=505225: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=562, BW=2249KiB/s (2303kB/s)(22.0MiB/10008msec) 00:29:02.786 slat (nsec): min=6880, max=33024, avg=10682.39, stdev=3960.08 00:29:02.786 clat (usec): min=10482, max=53005, avg=28392.54, stdev=5355.73 00:29:02.786 lat (usec): min=10496, max=53017, avg=28403.22, stdev=5355.82 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[16712], 5.00th=[21103], 10.00th=[23200], 20.00th=[24511], 00:29:02.786 | 30.00th=[25297], 40.00th=[25822], 50.00th=[26870], 60.00th=[28967], 00:29:02.786 | 70.00th=[31327], 80.00th=[32900], 90.00th=[35390], 95.00th=[37487], 00:29:02.786 | 99.00th=[42730], 99.50th=[44303], 99.90th=[53216], 99.95th=[53216], 00:29:02.786 | 99.99th=[53216] 00:29:02.786 bw ( KiB/s): min= 2048, max= 2392, per=4.13%, avg=2247.58, stdev=104.29, samples=19 00:29:02.786 iops : min= 512, max= 598, avg=561.89, 
stdev=26.07, samples=19 00:29:02.786 lat (msec) : 20=3.64%, 50=96.20%, 100=0.16% 00:29:02.786 cpu : usr=98.47%, sys=1.11%, ctx=13, majf=0, minf=76 00:29:02.786 IO depths : 1=0.1%, 2=0.5%, 4=6.4%, 8=78.1%, 16=15.0%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.2%, 8=6.4%, 16=3.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename1: (groupid=0, jobs=1): err= 0: pid=505226: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=587, BW=2350KiB/s (2407kB/s)(23.0MiB/10006msec) 00:29:02.786 slat (nsec): min=6157, max=35120, avg=12041.06, stdev=4674.40 00:29:02.786 clat (usec): min=10397, max=50491, avg=27164.65, stdev=4556.81 00:29:02.786 lat (usec): min=10404, max=50508, avg=27176.69, stdev=4556.75 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[17171], 5.00th=[22938], 10.00th=[23725], 20.00th=[24249], 00:29:02.786 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:29:02.786 | 70.00th=[27395], 80.00th=[31327], 90.00th=[33817], 95.00th=[35914], 00:29:02.786 | 99.00th=[40633], 99.50th=[42206], 99.90th=[46400], 99.95th=[46400], 00:29:02.786 | 99.99th=[50594] 00:29:02.786 bw ( KiB/s): min= 2144, max= 2512, per=4.29%, avg=2338.11, stdev=106.59, samples=19 00:29:02.786 iops : min= 536, max= 628, avg=584.53, stdev=26.65, samples=19 00:29:02.786 lat (msec) : 20=2.43%, 50=97.53%, 100=0.03% 00:29:02.786 cpu : usr=98.20%, sys=1.31%, ctx=14, majf=0, minf=62 00:29:02.786 IO depths : 1=0.2%, 2=0.8%, 4=7.3%, 8=77.9%, 16=13.8%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.0%, 8=5.7%, 16=4.3%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5879,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename1: (groupid=0, jobs=1): err= 0: pid=505227: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=547, BW=2190KiB/s (2242kB/s)(21.4MiB/10005msec) 00:29:02.786 slat (nsec): min=6580, max=31345, avg=9696.62, stdev=2990.65 00:29:02.786 clat (usec): min=6905, max=56864, avg=29171.68, stdev=5693.68 00:29:02.786 lat (usec): min=6913, max=56882, avg=29181.37, stdev=5693.71 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[17433], 5.00th=[21103], 10.00th=[23200], 20.00th=[24511], 00:29:02.786 | 30.00th=[25822], 40.00th=[26870], 50.00th=[28181], 60.00th=[30540], 00:29:02.786 | 70.00th=[31851], 80.00th=[33424], 90.00th=[35914], 95.00th=[39060], 00:29:02.786 | 99.00th=[45876], 99.50th=[49021], 99.90th=[50070], 99.95th=[56886], 00:29:02.786 | 99.99th=[56886] 00:29:02.786 bw ( KiB/s): min= 2056, max= 2384, per=4.01%, avg=2182.32, stdev=92.09, samples=19 00:29:02.786 iops : min= 514, max= 596, avg=545.58, stdev=23.02, samples=19 00:29:02.786 lat (msec) : 10=0.04%, 20=3.20%, 50=96.62%, 100=0.15% 00:29:02.786 cpu : usr=98.70%, sys=0.87%, ctx=12, majf=0, minf=68 00:29:02.786 IO depths : 1=0.1%, 2=0.6%, 4=7.3%, 8=77.0%, 16=15.0%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.4%, 8=6.1%, 16=3.5%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5477,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename2: 
(groupid=0, jobs=1): err= 0: pid=505228: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=588, BW=2353KiB/s (2409kB/s)(23.0MiB/10007msec) 00:29:02.786 slat (nsec): min=6896, max=75042, avg=13078.71, stdev=4594.21 00:29:02.786 clat (usec): min=11123, max=43925, avg=27128.96, stdev=4823.38 00:29:02.786 lat (usec): min=11132, max=43950, avg=27142.04, stdev=4823.90 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[14877], 5.00th=[20317], 10.00th=[23200], 20.00th=[24249], 00:29:02.786 | 30.00th=[24773], 40.00th=[25297], 50.00th=[25560], 60.00th=[26084], 00:29:02.786 | 70.00th=[28705], 80.00th=[31851], 90.00th=[33817], 95.00th=[35914], 00:29:02.786 | 99.00th=[40109], 99.50th=[40633], 99.90th=[43779], 99.95th=[43779], 00:29:02.786 | 99.99th=[43779] 00:29:02.786 bw ( KiB/s): min= 1920, max= 2496, per=4.31%, avg=2348.00, stdev=139.69, samples=20 00:29:02.786 iops : min= 480, max= 624, avg=587.00, stdev=34.92, samples=20 00:29:02.786 lat (msec) : 20=4.79%, 50=95.21% 00:29:02.786 cpu : usr=98.29%, sys=1.27%, ctx=17, majf=0, minf=73 00:29:02.786 IO depths : 1=0.6%, 2=1.3%, 4=9.3%, 8=76.3%, 16=12.5%, 32=0.0%, >=64=0.0% 00:29:02.786 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 complete : 0=0.0%, 4=90.3%, 8=4.6%, 16=5.1%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.786 issued rwts: total=5886,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.786 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.786 filename2: (groupid=0, jobs=1): err= 0: pid=505229: Thu Jul 25 12:14:48 2024 00:29:02.786 read: IOPS=586, BW=2347KiB/s (2404kB/s)(23.0MiB/10020msec) 00:29:02.786 slat (nsec): min=6854, max=51279, avg=12786.30, stdev=4746.77 00:29:02.786 clat (usec): min=13540, max=51356, avg=27183.77, stdev=4937.37 00:29:02.786 lat (usec): min=13555, max=51372, avg=27196.56, stdev=4937.29 00:29:02.786 clat percentiles (usec): 00:29:02.786 | 1.00th=[16319], 5.00th=[20841], 10.00th=[22938], 20.00th=[23987], 00:29:02.786 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25822], 60.00th=[26608], 00:29:02.786 | 70.00th=[28443], 80.00th=[31851], 90.00th=[33817], 95.00th=[35914], 00:29:02.786 | 99.00th=[42206], 99.50th=[43779], 99.90th=[49021], 99.95th=[49021], 00:29:02.786 | 99.99th=[51119] 00:29:02.787 bw ( KiB/s): min= 2176, max= 2528, per=4.31%, avg=2345.60, stdev=99.94, samples=20 00:29:02.787 iops : min= 544, max= 632, avg=586.40, stdev=24.99, samples=20 00:29:02.787 lat (msec) : 20=4.42%, 50=95.54%, 100=0.03% 00:29:02.787 cpu : usr=98.29%, sys=1.25%, ctx=15, majf=0, minf=75 00:29:02.787 IO depths : 1=0.4%, 2=1.2%, 4=8.0%, 8=77.4%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=90.1%, 8=5.0%, 16=4.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5880,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505230: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=532, BW=2129KiB/s (2180kB/s)(20.8MiB/10012msec) 00:29:02.787 slat (nsec): min=6863, max=36850, avg=12823.97, stdev=4517.97 00:29:02.787 clat (usec): min=12061, max=54833, avg=29983.38, stdev=6026.03 00:29:02.787 lat (usec): min=12072, max=54857, avg=29996.21, stdev=6025.92 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[16319], 5.00th=[21627], 10.00th=[23987], 20.00th=[25035], 00:29:02.787 | 30.00th=[25822], 40.00th=[26870], 50.00th=[30278], 60.00th=[31589], 00:29:02.787 | 
70.00th=[32637], 80.00th=[34341], 90.00th=[36963], 95.00th=[40109], 00:29:02.787 | 99.00th=[49021], 99.50th=[51119], 99.90th=[53216], 99.95th=[54789], 00:29:02.787 | 99.99th=[54789] 00:29:02.787 bw ( KiB/s): min= 1840, max= 2376, per=3.90%, avg=2125.05, stdev=153.45, samples=19 00:29:02.787 iops : min= 460, max= 594, avg=531.26, stdev=38.36, samples=19 00:29:02.787 lat (msec) : 20=3.42%, 50=95.78%, 100=0.81% 00:29:02.787 cpu : usr=98.56%, sys=1.04%, ctx=15, majf=0, minf=58 00:29:02.787 IO depths : 1=0.3%, 2=1.4%, 4=10.2%, 8=74.1%, 16=14.0%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=91.3%, 8=4.5%, 16=4.2%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5328,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505231: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=570, BW=2280KiB/s (2335kB/s)(22.3MiB/10014msec) 00:29:02.787 slat (nsec): min=7000, max=54828, avg=13490.35, stdev=4832.58 00:29:02.787 clat (usec): min=10721, max=51388, avg=27965.91, stdev=5496.95 00:29:02.787 lat (usec): min=10737, max=51403, avg=27979.40, stdev=5496.93 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[14877], 5.00th=[19530], 10.00th=[23462], 20.00th=[24511], 00:29:02.787 | 30.00th=[25035], 40.00th=[25297], 50.00th=[26084], 60.00th=[27395], 00:29:02.787 | 70.00th=[31327], 80.00th=[32637], 90.00th=[34866], 95.00th=[36963], 00:29:02.787 | 99.00th=[44303], 99.50th=[47973], 99.90th=[50070], 99.95th=[51119], 00:29:02.787 | 99.99th=[51643] 00:29:02.787 bw ( KiB/s): min= 2048, max= 2512, per=4.19%, avg=2280.80, stdev=122.90, samples=20 00:29:02.787 iops : min= 512, max= 628, avg=570.20, stdev=30.72, samples=20 00:29:02.787 lat (msec) : 20=5.31%, 50=94.48%, 100=0.21% 00:29:02.787 cpu : usr=98.31%, sys=1.26%, ctx=15, majf=0, minf=71 00:29:02.787 IO depths : 1=1.0%, 2=2.1%, 4=9.1%, 8=76.1%, 16=11.7%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=90.1%, 8=4.5%, 16=5.4%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5709,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505232: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=596, BW=2384KiB/s (2441kB/s)(23.3MiB/10025msec) 00:29:02.787 slat (nsec): min=3282, max=30927, avg=11841.07, stdev=3949.92 00:29:02.787 clat (usec): min=9234, max=49301, avg=26762.16, stdev=4587.99 00:29:02.787 lat (usec): min=9242, max=49326, avg=26774.00, stdev=4588.18 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[14746], 5.00th=[22152], 10.00th=[23462], 20.00th=[24249], 00:29:02.787 | 30.00th=[24773], 40.00th=[25035], 50.00th=[25560], 60.00th=[26084], 00:29:02.787 | 70.00th=[26870], 80.00th=[30540], 90.00th=[33424], 95.00th=[35390], 00:29:02.787 | 99.00th=[42206], 99.50th=[42730], 99.90th=[49021], 99.95th=[49021], 00:29:02.787 | 99.99th=[49546] 00:29:02.787 bw ( KiB/s): min= 2176, max= 2512, per=4.38%, avg=2383.60, stdev=102.50, samples=20 00:29:02.787 iops : min= 544, max= 628, avg=595.90, stdev=25.62, samples=20 00:29:02.787 lat (msec) : 10=0.08%, 20=3.13%, 50=96.79% 00:29:02.787 cpu : usr=98.30%, sys=1.22%, ctx=20, majf=0, minf=68 00:29:02.787 IO depths : 1=0.3%, 2=0.8%, 4=7.4%, 8=78.1%, 16=13.4%, 32=0.0%, >=64=0.0% 00:29:02.787 submit 
: 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=90.0%, 8=5.3%, 16=4.7%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5975,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505233: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=631, BW=2524KiB/s (2585kB/s)(24.7MiB/10022msec) 00:29:02.787 slat (nsec): min=4213, max=55067, avg=11254.38, stdev=3712.47 00:29:02.787 clat (usec): min=9485, max=43687, avg=25281.29, stdev=3191.67 00:29:02.787 lat (usec): min=9493, max=43695, avg=25292.55, stdev=3191.80 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[14746], 5.00th=[21890], 10.00th=[23200], 20.00th=[23987], 00:29:02.787 | 30.00th=[24511], 40.00th=[24773], 50.00th=[25035], 60.00th=[25297], 00:29:02.787 | 70.00th=[25822], 80.00th=[26084], 90.00th=[27132], 95.00th=[31589], 00:29:02.787 | 99.00th=[36439], 99.50th=[37487], 99.90th=[43779], 99.95th=[43779], 00:29:02.787 | 99.99th=[43779] 00:29:02.787 bw ( KiB/s): min= 2432, max= 2656, per=4.63%, avg=2523.60, stdev=60.68, samples=20 00:29:02.787 iops : min= 608, max= 664, avg=630.90, stdev=15.17, samples=20 00:29:02.787 lat (msec) : 10=0.11%, 20=3.79%, 50=96.09% 00:29:02.787 cpu : usr=98.30%, sys=1.27%, ctx=15, majf=0, minf=77 00:29:02.787 IO depths : 1=0.4%, 2=0.9%, 4=7.2%, 8=78.6%, 16=12.9%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=89.5%, 8=5.6%, 16=4.9%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=6325,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505234: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=520, BW=2084KiB/s (2134kB/s)(20.4MiB/10005msec) 00:29:02.787 slat (nsec): min=6867, max=73372, avg=10787.88, stdev=5341.37 00:29:02.787 clat (usec): min=6548, max=58476, avg=30655.73, stdev=5978.43 00:29:02.787 lat (usec): min=6555, max=58501, avg=30666.52, stdev=5978.86 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[17695], 5.00th=[21627], 10.00th=[23987], 20.00th=[25297], 00:29:02.787 | 30.00th=[26870], 40.00th=[29754], 50.00th=[31065], 60.00th=[32113], 00:29:02.787 | 70.00th=[33162], 80.00th=[34866], 90.00th=[37487], 95.00th=[40109], 00:29:02.787 | 99.00th=[48497], 99.50th=[50070], 99.90th=[58459], 99.95th=[58459], 00:29:02.787 | 99.99th=[58459] 00:29:02.787 bw ( KiB/s): min= 1763, max= 2312, per=3.82%, avg=2080.95, stdev=131.53, samples=20 00:29:02.787 iops : min= 440, max= 578, avg=520.20, stdev=32.98, samples=20 00:29:02.787 lat (msec) : 10=0.19%, 20=3.01%, 50=96.41%, 100=0.38% 00:29:02.787 cpu : usr=98.61%, sys=0.98%, ctx=15, majf=0, minf=67 00:29:02.787 IO depths : 1=0.1%, 2=0.6%, 4=6.8%, 8=77.4%, 16=15.0%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=90.4%, 8=6.0%, 16=3.6%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5212,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 filename2: (groupid=0, jobs=1): err= 0: pid=505235: Thu Jul 25 12:14:48 2024 00:29:02.787 read: IOPS=568, BW=2272KiB/s (2327kB/s)(22.2MiB/10004msec) 00:29:02.787 slat (nsec): min=9663, max=85848, avg=48883.21, stdev=17209.15 00:29:02.787 clat (usec): min=4236, 
max=56499, avg=27913.60, stdev=5853.36 00:29:02.787 lat (usec): min=4275, max=56544, avg=27962.48, stdev=5854.02 00:29:02.787 clat percentiles (usec): 00:29:02.787 | 1.00th=[14615], 5.00th=[19530], 10.00th=[22938], 20.00th=[24249], 00:29:02.787 | 30.00th=[25035], 40.00th=[25560], 50.00th=[26084], 60.00th=[27919], 00:29:02.787 | 70.00th=[30540], 80.00th=[32637], 90.00th=[35390], 95.00th=[38536], 00:29:02.787 | 99.00th=[44827], 99.50th=[49546], 99.90th=[56361], 99.95th=[56361], 00:29:02.787 | 99.99th=[56361] 00:29:02.787 bw ( KiB/s): min= 2000, max= 2528, per=4.13%, avg=2248.00, stdev=159.53, samples=19 00:29:02.787 iops : min= 500, max= 632, avg=562.00, stdev=39.88, samples=19 00:29:02.787 lat (msec) : 10=0.21%, 20=5.35%, 50=93.98%, 100=0.46% 00:29:02.787 cpu : usr=98.56%, sys=1.01%, ctx=13, majf=0, minf=72 00:29:02.787 IO depths : 1=0.4%, 2=1.2%, 4=10.1%, 8=74.1%, 16=14.1%, 32=0.0%, >=64=0.0% 00:29:02.787 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 complete : 0=0.0%, 4=91.2%, 8=4.9%, 16=3.8%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:02.787 issued rwts: total=5683,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:02.787 latency : target=0, window=0, percentile=100.00%, depth=16 00:29:02.787 00:29:02.787 Run status group 0 (all jobs): 00:29:02.787 READ: bw=53.2MiB/s (55.7MB/s), 2019KiB/s-2524KiB/s (2068kB/s-2585kB/s), io=533MiB (559MB), run=10003-10026msec 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@113 -- # destroy_subsystems 0 1 2 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.787 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:02.788 
12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 2 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=2 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode2 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null2 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # NULL_DIF=1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # bs=8k,16k,128k 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # numjobs=2 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # iodepth=8 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # runtime=5 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@115 -- # files=1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@117 -- # create_subsystems 0 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@28 -- # local sub 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 0 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=0 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 bdev_null0 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 [2024-07-25 12:14:48.566415] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@30 -- # for sub in "$@" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@31 -- # create_subsystem 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@18 -- # local sub_id=1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null1 64 512 --md-size 16 --dif-type 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 bdev_null1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode1 --serial-number 53313233-1 --allow-any-host 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode1 bdev_null1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode1 -t tcp -a 10.0.0.2 -s 4420 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # fio /dev/fd/62 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@118 -- # create_json_sub_conf 0 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@51 -- # gen_nvmf_target_json 0 1 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # config=() 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@532 -- # local subsystem config 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for 
subsystem in "${@:-1}" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:02.788 { 00:29:02.788 "params": { 00:29:02.788 "name": "Nvme$subsystem", 00:29:02.788 "trtype": "$TEST_TRANSPORT", 00:29:02.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.788 "adrfam": "ipv4", 00:29:02.788 "trsvcid": "$NVMF_PORT", 00:29:02.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.788 "hdgst": ${hdgst:-false}, 00:29:02.788 "ddgst": ${ddgst:-false} 00:29:02.788 }, 00:29:02.788 "method": "bdev_nvme_attach_controller" 00:29:02.788 } 00:29:02.788 EOF 00:29:02.788 )") 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@82 -- # gen_fio_conf 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@54 -- # local file 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@56 -- # cat 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1341 -- # shift 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file = 1 )) 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libasan 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@73 -- # cat 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:02.788 { 00:29:02.788 "params": { 00:29:02.788 "name": "Nvme$subsystem", 00:29:02.788 "trtype": "$TEST_TRANSPORT", 00:29:02.788 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:02.788 "adrfam": "ipv4", 00:29:02.788 "trsvcid": "$NVMF_PORT", 00:29:02.788 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:02.788 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:02.788 "hdgst": ${hdgst:-false}, 00:29:02.788 "ddgst": ${ddgst:-false} 00:29:02.788 }, 
00:29:02.788 "method": "bdev_nvme_attach_controller" 00:29:02.788 } 00:29:02.788 EOF 00:29:02.788 )") 00:29:02.788 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file++ )) 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- target/dif.sh@72 -- # (( file <= files )) 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@554 -- # cat 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@556 -- # jq . 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@557 -- # IFS=, 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:02.789 "params": { 00:29:02.789 "name": "Nvme0", 00:29:02.789 "trtype": "tcp", 00:29:02.789 "traddr": "10.0.0.2", 00:29:02.789 "adrfam": "ipv4", 00:29:02.789 "trsvcid": "4420", 00:29:02.789 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:02.789 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:02.789 "hdgst": false, 00:29:02.789 "ddgst": false 00:29:02.789 }, 00:29:02.789 "method": "bdev_nvme_attach_controller" 00:29:02.789 },{ 00:29:02.789 "params": { 00:29:02.789 "name": "Nvme1", 00:29:02.789 "trtype": "tcp", 00:29:02.789 "traddr": "10.0.0.2", 00:29:02.789 "adrfam": "ipv4", 00:29:02.789 "trsvcid": "4420", 00:29:02.789 "subnqn": "nqn.2016-06.io.spdk:cnode1", 00:29:02.789 "hostnqn": "nqn.2016-06.io.spdk:host1", 00:29:02.789 "hdgst": false, 00:29:02.789 "ddgst": false 00:29:02.789 }, 00:29:02.789 "method": "bdev_nvme_attach_controller" 00:29:02.789 }' 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:02.789 12:14:48 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:02.789 filename0: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:02.789 ... 00:29:02.789 filename1: (g=0): rw=randread, bs=(R) 8192B-8192B, (W) 16.0KiB-16.0KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=8 00:29:02.789 ... 
00:29:02.789 fio-3.35 00:29:02.789 Starting 4 threads 00:29:02.789 EAL: No free 2048 kB hugepages reported on node 1 00:29:08.074 00:29:08.074 filename0: (groupid=0, jobs=1): err= 0: pid=507081: Thu Jul 25 12:14:54 2024 00:29:08.074 read: IOPS=2618, BW=20.5MiB/s (21.4MB/s)(102MiB/5003msec) 00:29:08.074 slat (nsec): min=6094, max=63451, avg=12115.30, stdev=7877.01 00:29:08.074 clat (usec): min=1668, max=14366, avg=3024.40, stdev=498.69 00:29:08.074 lat (usec): min=1680, max=14405, avg=3036.52, stdev=498.80 00:29:08.074 clat percentiles (usec): 00:29:08.074 | 1.00th=[ 2073], 5.00th=[ 2311], 10.00th=[ 2474], 20.00th=[ 2671], 00:29:08.074 | 30.00th=[ 2835], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:29:08.074 | 70.00th=[ 3195], 80.00th=[ 3326], 90.00th=[ 3556], 95.00th=[ 3752], 00:29:08.074 | 99.00th=[ 4113], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[13960], 00:29:08.074 | 99.99th=[14353] 00:29:08.074 bw ( KiB/s): min=20112, max=21712, per=25.24%, avg=20948.80, stdev=474.64, samples=10 00:29:08.074 iops : min= 2514, max= 2714, avg=2618.60, stdev=59.33, samples=10 00:29:08.074 lat (msec) : 2=0.53%, 4=97.64%, 10=1.77%, 20=0.06% 00:29:08.074 cpu : usr=97.20%, sys=2.48%, ctx=7, majf=0, minf=70 00:29:08.074 IO depths : 1=0.1%, 2=0.8%, 4=66.3%, 8=32.8%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 complete : 0=0.0%, 4=96.2%, 8=3.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 issued rwts: total=13098,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:08.074 filename0: (groupid=0, jobs=1): err= 0: pid=507082: Thu Jul 25 12:14:54 2024 00:29:08.074 read: IOPS=2577, BW=20.1MiB/s (21.1MB/s)(101MiB/5001msec) 00:29:08.074 slat (nsec): min=5932, max=53867, avg=13046.83, stdev=7617.68 00:29:08.074 clat (usec): min=1660, max=7767, avg=3071.77, stdev=427.29 00:29:08.074 lat (usec): min=1686, max=7790, avg=3084.82, stdev=427.13 00:29:08.074 clat percentiles (usec): 00:29:08.074 | 1.00th=[ 2180], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2769], 00:29:08.074 | 30.00th=[ 2900], 40.00th=[ 2999], 50.00th=[ 3032], 60.00th=[ 3097], 00:29:08.074 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3621], 95.00th=[ 3818], 00:29:08.074 | 99.00th=[ 4178], 99.50th=[ 4293], 99.90th=[ 4817], 99.95th=[ 7373], 00:29:08.074 | 99.99th=[ 7701] 00:29:08.074 bw ( KiB/s): min=19664, max=21184, per=24.79%, avg=20574.22, stdev=419.71, samples=9 00:29:08.074 iops : min= 2458, max= 2648, avg=2571.78, stdev=52.46, samples=9 00:29:08.074 lat (msec) : 2=0.19%, 4=97.56%, 10=2.26% 00:29:08.074 cpu : usr=96.12%, sys=3.08%, ctx=56, majf=0, minf=85 00:29:08.074 IO depths : 1=0.1%, 2=0.9%, 4=65.9%, 8=33.1%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 complete : 0=0.0%, 4=96.5%, 8=3.5%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 issued rwts: total=12888,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:08.074 filename1: (groupid=0, jobs=1): err= 0: pid=507083: Thu Jul 25 12:14:54 2024 00:29:08.074 read: IOPS=2600, BW=20.3MiB/s (21.3MB/s)(102MiB/5003msec) 00:29:08.074 slat (nsec): min=5949, max=53745, avg=11754.53, stdev=6790.31 00:29:08.074 clat (usec): min=1802, max=10473, avg=3046.97, stdev=458.80 00:29:08.074 lat (usec): min=1809, max=10497, avg=3058.72, stdev=458.53 00:29:08.074 clat percentiles (usec): 00:29:08.074 | 1.00th=[ 2114], 
5.00th=[ 2343], 10.00th=[ 2474], 20.00th=[ 2704], 00:29:08.074 | 30.00th=[ 2868], 40.00th=[ 2966], 50.00th=[ 3032], 60.00th=[ 3097], 00:29:08.074 | 70.00th=[ 3228], 80.00th=[ 3359], 90.00th=[ 3589], 95.00th=[ 3785], 00:29:08.074 | 99.00th=[ 4178], 99.50th=[ 4359], 99.90th=[ 5014], 99.95th=[ 9896], 00:29:08.074 | 99.99th=[10421] 00:29:08.074 bw ( KiB/s): min=19840, max=21232, per=25.07%, avg=20804.80, stdev=422.65, samples=10 00:29:08.074 iops : min= 2480, max= 2654, avg=2600.60, stdev=52.83, samples=10 00:29:08.074 lat (msec) : 2=0.17%, 4=97.80%, 10=2.01%, 20=0.02% 00:29:08.074 cpu : usr=97.12%, sys=2.38%, ctx=96, majf=0, minf=89 00:29:08.074 IO depths : 1=0.1%, 2=1.0%, 4=66.2%, 8=32.7%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 issued rwts: total=13008,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:08.074 filename1: (groupid=0, jobs=1): err= 0: pid=507084: Thu Jul 25 12:14:54 2024 00:29:08.074 read: IOPS=2579, BW=20.2MiB/s (21.1MB/s)(101MiB/5001msec) 00:29:08.074 slat (nsec): min=6104, max=63190, avg=12296.31, stdev=8225.45 00:29:08.074 clat (usec): min=1706, max=8846, avg=3070.37, stdev=438.25 00:29:08.074 lat (usec): min=1713, max=8872, avg=3082.66, stdev=438.23 00:29:08.074 clat percentiles (usec): 00:29:08.074 | 1.00th=[ 2147], 5.00th=[ 2376], 10.00th=[ 2540], 20.00th=[ 2737], 00:29:08.074 | 30.00th=[ 2868], 40.00th=[ 2999], 50.00th=[ 3064], 60.00th=[ 3130], 00:29:08.074 | 70.00th=[ 3261], 80.00th=[ 3392], 90.00th=[ 3621], 95.00th=[ 3785], 00:29:08.074 | 99.00th=[ 4146], 99.50th=[ 4293], 99.90th=[ 4686], 99.95th=[ 8455], 00:29:08.074 | 99.99th=[ 8717] 00:29:08.074 bw ( KiB/s): min=19664, max=21232, per=24.90%, avg=20659.56, stdev=469.88, samples=9 00:29:08.074 iops : min= 2458, max= 2654, avg=2582.44, stdev=58.73, samples=9 00:29:08.074 lat (msec) : 2=0.26%, 4=97.73%, 10=2.01% 00:29:08.074 cpu : usr=97.64%, sys=2.04%, ctx=9, majf=0, minf=96 00:29:08.074 IO depths : 1=0.1%, 2=1.0%, 4=66.4%, 8=32.5%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:08.074 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 complete : 0=0.0%, 4=96.1%, 8=3.9%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:08.074 issued rwts: total=12899,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:08.074 latency : target=0, window=0, percentile=100.00%, depth=8 00:29:08.074 00:29:08.074 Run status group 0 (all jobs): 00:29:08.074 READ: bw=81.0MiB/s (85.0MB/s), 20.1MiB/s-20.5MiB/s (21.1MB/s-21.4MB/s), io=405MiB (425MB), run=5001-5003msec 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@119 -- # destroy_subsystems 0 1 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@43 -- # local sub 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=0 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- 
common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@45 -- # for sub in "$@" 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@46 -- # destroy_subsystem 1 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@36 -- # local sub_id=1 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode1 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null1 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.074 00:29:08.074 real 0m24.160s 00:29:08.074 user 4m51.278s 00:29:08.074 sys 0m4.790s 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif.fio_dif_rand_params -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 ************************************ 00:29:08.074 END TEST fio_dif_rand_params 00:29:08.074 ************************************ 00:29:08.074 12:14:54 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:08.074 12:14:54 nvmf_dif -- target/dif.sh@144 -- # run_test fio_dif_digest fio_dif_digest 00:29:08.074 12:14:54 nvmf_dif -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:08.074 12:14:54 nvmf_dif -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:08.074 12:14:54 nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:08.074 ************************************ 00:29:08.074 START TEST fio_dif_digest 00:29:08.075 ************************************ 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1123 -- # fio_dif_digest 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@123 -- # local NULL_DIF 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@124 -- # local bs numjobs runtime iodepth files 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@125 -- # local hdgst ddgst 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # NULL_DIF=3 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # bs=128k,128k,128k 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # numjobs=3 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # iodepth=3 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@127 -- # runtime=10 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # hdgst=true 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@128 -- # 
ddgst=true 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@130 -- # create_subsystems 0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@28 -- # local sub 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@30 -- # for sub in "$@" 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@31 -- # create_subsystem 0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@18 -- # local sub_id=0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@21 -- # rpc_cmd bdev_null_create bdev_null0 64 512 --md-size 16 --dif-type 3 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:08.075 bdev_null0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@22 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:cnode0 --serial-number 53313233-0 --allow-any-host 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@23 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:cnode0 bdev_null0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@24 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:cnode0 -t tcp -a 10.0.0.2 -s 4420 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:08.075 [2024-07-25 12:14:54.983262] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # fio /dev/fd/62 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@131 -- # create_json_sub_conf 0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@51 -- # gen_nvmf_target_json 0 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # config=() 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@532 -- # local subsystem config 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@534 -- # for subsystem in "${@:-1}" 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1356 -- # fio_plugin /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@82 -- # gen_fio_conf 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # config+=("$(cat <<-EOF 00:29:08.075 { 00:29:08.075 "params": { 00:29:08.075 "name": "Nvme$subsystem", 
00:29:08.075 "trtype": "$TEST_TRANSPORT", 00:29:08.075 "traddr": "$NVMF_FIRST_TARGET_IP", 00:29:08.075 "adrfam": "ipv4", 00:29:08.075 "trsvcid": "$NVMF_PORT", 00:29:08.075 "subnqn": "nqn.2016-06.io.spdk:cnode$subsystem", 00:29:08.075 "hostnqn": "nqn.2016-06.io.spdk:host$subsystem", 00:29:08.075 "hdgst": ${hdgst:-false}, 00:29:08.075 "ddgst": ${ddgst:-false} 00:29:08.075 }, 00:29:08.075 "method": "bdev_nvme_attach_controller" 00:29:08.075 } 00:29:08.075 EOF 00:29:08.075 )") 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1337 -- # local fio_dir=/usr/src/fio 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@54 -- # local file 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@56 -- # cat 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1339 -- # local sanitizers 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1340 -- # local plugin=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1341 -- # shift 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1343 -- # local asan_lib= 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@554 -- # cat 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file = 1 )) 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- target/dif.sh@72 -- # (( file <= files )) 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libasan 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@556 -- # jq . 
00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@557 -- # IFS=, 00:29:08.075 12:14:54 nvmf_dif.fio_dif_digest -- nvmf/common.sh@558 -- # printf '%s\n' '{ 00:29:08.075 "params": { 00:29:08.075 "name": "Nvme0", 00:29:08.075 "trtype": "tcp", 00:29:08.075 "traddr": "10.0.0.2", 00:29:08.075 "adrfam": "ipv4", 00:29:08.075 "trsvcid": "4420", 00:29:08.075 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:29:08.075 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:29:08.075 "hdgst": true, 00:29:08.075 "ddgst": true 00:29:08.075 }, 00:29:08.075 "method": "bdev_nvme_attach_controller" 00:29:08.075 }' 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1344 -- # for sanitizer in "${sanitizers[@]}" 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # ldd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # grep libclang_rt.asan 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # awk '{print $3}' 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1345 -- # asan_lib= 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1346 -- # [[ -n '' ]] 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # LD_PRELOAD=' /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/fio/spdk_bdev' 00:29:08.075 12:14:55 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1352 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf /dev/fd/62 /dev/fd/61 00:29:08.334 filename0: (g=0): rw=randread, bs=(R) 128KiB-128KiB, (W) 128KiB-128KiB, (T) 128KiB-128KiB, ioengine=spdk_bdev, iodepth=3 00:29:08.334 ... 
00:29:08.334 fio-3.35 00:29:08.334 Starting 3 threads 00:29:08.334 EAL: No free 2048 kB hugepages reported on node 1 00:29:20.543 00:29:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=508243: Thu Jul 25 12:15:05 2024 00:29:20.543 read: IOPS=307, BW=38.5MiB/s (40.3MB/s)(385MiB/10008msec) 00:29:20.543 slat (nsec): min=6408, max=26780, avg=10739.73, stdev=2332.61 00:29:20.543 clat (usec): min=5692, max=94236, avg=9736.62, stdev=6165.58 00:29:20.543 lat (usec): min=5700, max=94249, avg=9747.36, stdev=6165.88 00:29:20.543 clat percentiles (usec): 00:29:20.543 | 1.00th=[ 6063], 5.00th=[ 6390], 10.00th=[ 6783], 20.00th=[ 7439], 00:29:20.543 | 30.00th=[ 7898], 40.00th=[ 8455], 50.00th=[ 8848], 60.00th=[ 9503], 00:29:20.543 | 70.00th=[ 9896], 80.00th=[10421], 90.00th=[11207], 95.00th=[12256], 00:29:20.543 | 99.00th=[51643], 99.50th=[53216], 99.90th=[56361], 99.95th=[57410], 00:29:20.543 | 99.99th=[93848] 00:29:20.543 bw ( KiB/s): min=29440, max=46592, per=46.67%, avg=39385.60, stdev=4487.62, samples=20 00:29:20.543 iops : min= 230, max= 364, avg=307.70, stdev=35.06, samples=20 00:29:20.543 lat (msec) : 10=72.13%, 20=26.05%, 50=0.23%, 100=1.59% 00:29:20.543 cpu : usr=94.47%, sys=5.14%, ctx=14, majf=0, minf=157 00:29:20.543 IO depths : 1=0.1%, 2=99.9%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 issued rwts: total=3079,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=508244: Thu Jul 25 12:15:05 2024 00:29:20.543 read: IOPS=147, BW=18.4MiB/s (19.3MB/s)(185MiB/10053msec) 00:29:20.543 slat (nsec): min=6480, max=25280, avg=11320.93, stdev=2141.86 00:29:20.543 clat (msec): min=5, max=104, avg=20.32, stdev=14.01 00:29:20.543 lat (msec): min=5, max=104, avg=20.33, stdev=14.01 00:29:20.543 clat percentiles (msec): 00:29:20.543 | 1.00th=[ 7], 5.00th=[ 8], 10.00th=[ 9], 20.00th=[ 11], 00:29:20.543 | 30.00th=[ 15], 40.00th=[ 16], 50.00th=[ 17], 60.00th=[ 18], 00:29:20.543 | 70.00th=[ 20], 80.00th=[ 23], 90.00th=[ 37], 95.00th=[ 58], 00:29:20.543 | 99.00th=[ 66], 99.50th=[ 69], 99.90th=[ 103], 99.95th=[ 105], 00:29:20.543 | 99.99th=[ 105] 00:29:20.543 bw ( KiB/s): min=13056, max=25856, per=22.43%, avg=18931.20, stdev=2770.28, samples=20 00:29:20.543 iops : min= 102, max= 202, avg=147.90, stdev=21.64, samples=20 00:29:20.543 lat (msec) : 10=15.73%, 20=58.07%, 50=16.48%, 100=9.59%, 250=0.14% 00:29:20.543 cpu : usr=96.32%, sys=3.31%, ctx=13, majf=0, minf=113 00:29:20.543 IO depths : 1=0.8%, 2=99.2%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 issued rwts: total=1481,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.543 filename0: (groupid=0, jobs=1): err= 0: pid=508245: Thu Jul 25 12:15:05 2024 00:29:20.543 read: IOPS=205, BW=25.7MiB/s (27.0MB/s)(259MiB/10051msec) 00:29:20.543 slat (nsec): min=6503, max=35410, avg=11465.59, stdev=2011.50 00:29:20.543 clat (usec): min=6124, max=98160, avg=14545.28, stdev=12617.51 00:29:20.543 lat (usec): min=6131, max=98172, avg=14556.74, stdev=12617.59 00:29:20.543 clat percentiles (usec): 00:29:20.543 | 1.00th=[ 6849], 5.00th=[ 8029], 
10.00th=[ 8586], 20.00th=[ 9503], 00:29:20.543 | 30.00th=[ 9896], 40.00th=[10290], 50.00th=[10683], 60.00th=[11076], 00:29:20.543 | 70.00th=[11600], 80.00th=[12649], 90.00th=[16319], 95.00th=[54264], 00:29:20.543 | 99.00th=[57410], 99.50th=[58459], 99.90th=[60031], 99.95th=[60556], 00:29:20.543 | 99.99th=[98042] 00:29:20.543 bw ( KiB/s): min=19200, max=33024, per=31.34%, avg=26447.45, stdev=3335.22, samples=20 00:29:20.543 iops : min= 150, max= 258, avg=206.60, stdev=26.05, samples=20 00:29:20.543 lat (msec) : 10=31.24%, 20=60.01%, 50=0.34%, 100=8.41% 00:29:20.543 cpu : usr=95.94%, sys=3.68%, ctx=13, majf=0, minf=118 00:29:20.543 IO depths : 1=0.1%, 2=100.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:29:20.543 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:29:20.543 issued rwts: total=2068,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:29:20.543 latency : target=0, window=0, percentile=100.00%, depth=3 00:29:20.543 00:29:20.543 Run status group 0 (all jobs): 00:29:20.543 READ: bw=82.4MiB/s (86.4MB/s), 18.4MiB/s-38.5MiB/s (19.3MB/s-40.3MB/s), io=829MiB (869MB), run=10008-10053msec 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@132 -- # destroy_subsystems 0 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@43 -- # local sub 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@45 -- # for sub in "$@" 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@46 -- # destroy_subsystem 0 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@36 -- # local sub_id=0 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@38 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:cnode0 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- target/dif.sh@39 -- # rpc_cmd bdev_null_delete bdev_null0 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:20.543 00:29:20.543 real 0m11.029s 00:29:20.543 user 0m35.290s 00:29:20.543 sys 0m1.490s 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:20.543 12:15:05 nvmf_dif.fio_dif_digest -- common/autotest_common.sh@10 -- # set +x 00:29:20.544 ************************************ 00:29:20.544 END TEST fio_dif_digest 00:29:20.544 ************************************ 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@1142 -- # return 0 00:29:20.544 12:15:06 nvmf_dif -- target/dif.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:29:20.544 12:15:06 nvmf_dif -- target/dif.sh@147 -- # nvmftestfini 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@488 -- # nvmfcleanup 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@117 -- # sync 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@120 -- # set +e 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@121 -- # for i in {1..20} 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:29:20.544 
rmmod nvme_tcp 00:29:20.544 rmmod nvme_fabrics 00:29:20.544 rmmod nvme_keyring 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@124 -- # set -e 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@125 -- # return 0 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@489 -- # '[' -n 499645 ']' 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@490 -- # killprocess 499645 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@948 -- # '[' -z 499645 ']' 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@952 -- # kill -0 499645 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@953 -- # uname 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 499645 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@966 -- # echo 'killing process with pid 499645' 00:29:20.544 killing process with pid 499645 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@967 -- # kill 499645 00:29:20.544 12:15:06 nvmf_dif -- common/autotest_common.sh@972 -- # wait 499645 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:29:20.544 12:15:06 nvmf_dif -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:21.925 Waiting for block devices as requested 00:29:21.925 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:21.925 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:21.925 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:21.925 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:21.925 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:22.184 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:22.184 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:22.184 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:22.184 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:22.444 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:22.444 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:22.444 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:22.705 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:22.705 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:22.705 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:22.705 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:22.965 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:22.965 12:15:10 nvmf_dif -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:29:22.965 12:15:10 nvmf_dif -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:29:22.965 12:15:10 nvmf_dif -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:29:22.965 12:15:10 nvmf_dif -- nvmf/common.sh@278 -- # remove_spdk_ns 00:29:22.965 12:15:10 nvmf_dif -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:22.965 12:15:10 nvmf_dif -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:22.965 12:15:10 nvmf_dif -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:24.872 12:15:12 nvmf_dif -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:29:25.132 00:29:25.132 real 1m12.586s 00:29:25.132 user 7m8.510s 00:29:25.132 sys 0m18.038s 00:29:25.132 12:15:12 nvmf_dif -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:25.132 12:15:12 
nvmf_dif -- common/autotest_common.sh@10 -- # set +x 00:29:25.132 ************************************ 00:29:25.132 END TEST nvmf_dif 00:29:25.132 ************************************ 00:29:25.132 12:15:12 -- common/autotest_common.sh@1142 -- # return 0 00:29:25.132 12:15:12 -- spdk/autotest.sh@293 -- # run_test nvmf_abort_qd_sizes /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:25.132 12:15:12 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:25.132 12:15:12 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:25.132 12:15:12 -- common/autotest_common.sh@10 -- # set +x 00:29:25.132 ************************************ 00:29:25.132 START TEST nvmf_abort_qd_sizes 00:29:25.132 ************************************ 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target/abort_qd_sizes.sh 00:29:25.132 * Looking for test storage... 00:29:25.132 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/target 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@14 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # uname -s 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- paths/export.sh@2 -- # 
PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- paths/export.sh@5 -- # export PATH 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@47 -- # : 0 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@51 -- # have_pci_nics=0 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@70 -- # nvmftestinit 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@441 -- # '[' -z tcp ']' 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@446 -- # trap nvmftestfini SIGINT SIGTERM EXIT 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@448 -- # prepare_net_devs 00:29:25.132 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@410 -- # local -g is_hw=no 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@412 -- # remove_spdk_ns 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:29:25.133 12:15:12 
nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # [[ phy != virt ]] 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # gather_supported_nvmf_pci_devs 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- nvmf/common.sh@285 -- # xtrace_disable 00:29:25.133 12:15:12 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@289 -- # local intel=0x8086 mellanox=0x15b3 pci net_dev 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # pci_devs=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@291 -- # local -a pci_devs 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # pci_net_devs=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@292 -- # local -a pci_net_devs 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # pci_drivers=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@293 -- # local -A pci_drivers 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # net_devs=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@295 -- # local -ga net_devs 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # e810=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@296 -- # local -ga e810 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # x722=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@297 -- # local -ga x722 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # mlx=() 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@298 -- # local -ga mlx 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@301 -- # e810+=(${pci_bus_cache["$intel:0x1592"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@302 -- # e810+=(${pci_bus_cache["$intel:0x159b"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@304 -- # x722+=(${pci_bus_cache["$intel:0x37d2"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@306 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2dc"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@308 -- # mlx+=(${pci_bus_cache["$mellanox:0x1021"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@310 -- # mlx+=(${pci_bus_cache["$mellanox:0xa2d6"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@312 -- # mlx+=(${pci_bus_cache["$mellanox:0x101d"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@314 -- # mlx+=(${pci_bus_cache["$mellanox:0x1017"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@315 -- # mlx+=(${pci_bus_cache["$mellanox:0x1019"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@317 -- # mlx+=(${pci_bus_cache["$mellanox:0x1015"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@318 -- # mlx+=(${pci_bus_cache["$mellanox:0x1013"]}) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@320 -- # pci_devs+=("${e810[@]}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@321 -- # [[ tcp == rdma ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@327 -- # [[ e810 == mlx5 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@329 -- # [[ e810 == e810 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@330 -- # pci_devs=("${e810[@]}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@335 -- # (( 2 == 0 )) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # 
echo 'Found 0000:86:00.0 (0x8086 - 0x159b)' 00:29:30.445 Found 0000:86:00.0 (0x8086 - 0x159b) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@340 -- # for pci in "${pci_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@341 -- # echo 'Found 0000:86:00.1 (0x8086 - 0x159b)' 00:29:30.445 Found 0000:86:00.1 (0x8086 - 0x159b) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@342 -- # [[ ice == unknown ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@346 -- # [[ ice == unbound ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@350 -- # [[ 0x159b == \0\x\1\0\1\7 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@351 -- # [[ 0x159b == \0\x\1\0\1\9 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@352 -- # [[ tcp == rdma ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@366 -- # (( 0 > 0 )) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ e810 == e810 ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@372 -- # [[ tcp == rdma ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.0: cvl_0_0' 00:29:30.445 Found net devices under 0000:86:00.0: cvl_0_0 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@382 -- # for pci in "${pci_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@383 -- # pci_net_devs=("/sys/bus/pci/devices/$pci/net/"*) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@388 -- # [[ tcp == tcp ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@389 -- # for net_dev in "${!pci_net_devs[@]}" 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@390 -- # [[ up == up ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@394 -- # (( 1 == 0 )) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@399 -- # pci_net_devs=("${pci_net_devs[@]##*/}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@400 -- # echo 'Found net devices under 0000:86:00.1: cvl_0_1' 00:29:30.445 Found net devices under 0000:86:00.1: cvl_0_1 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@401 -- # net_devs+=("${pci_net_devs[@]}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@404 -- # (( 2 == 0 )) 
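The NIC-to-interface mapping the script just derived (0000:86:00.0 -> cvl_0_0, 0000:86:00.1 -> cvl_0_1) comes from the /sys/bus/pci/devices/<bdf>/net/ glob visible in the trace; reproduced by hand it is simply:

for bdf in 0000:86:00.0 0000:86:00.1; do
  # each kernel network interface bound to the port appears as a directory
  # under the device's net/ subtree in sysfs
  for dir in /sys/bus/pci/devices/"$bdf"/net/*; do
    [ -e "$dir" ] && printf '%s -> %s\n' "$bdf" "$(basename "$dir")"
  done
done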
00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@414 -- # is_hw=yes 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@416 -- # [[ yes == yes ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@417 -- # [[ tcp == tcp ]] 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@418 -- # nvmf_tcp_init 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@229 -- # NVMF_INITIATOR_IP=10.0.0.1 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@230 -- # NVMF_FIRST_TARGET_IP=10.0.0.2 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@231 -- # TCP_INTERFACE_LIST=("${net_devs[@]}") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@234 -- # (( 2 > 1 )) 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@236 -- # NVMF_TARGET_INTERFACE=cvl_0_0 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@237 -- # NVMF_INITIATOR_INTERFACE=cvl_0_1 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@240 -- # NVMF_SECOND_TARGET_IP= 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@242 -- # NVMF_TARGET_NAMESPACE=cvl_0_0_ns_spdk 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@243 -- # NVMF_TARGET_NS_CMD=(ip netns exec "$NVMF_TARGET_NAMESPACE") 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@244 -- # ip -4 addr flush cvl_0_0 00:29:30.445 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@245 -- # ip -4 addr flush cvl_0_1 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@248 -- # ip netns add cvl_0_0_ns_spdk 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@251 -- # ip link set cvl_0_0 netns cvl_0_0_ns_spdk 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@254 -- # ip addr add 10.0.0.1/24 dev cvl_0_1 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@255 -- # ip netns exec cvl_0_0_ns_spdk ip addr add 10.0.0.2/24 dev cvl_0_0 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@258 -- # ip link set cvl_0_1 up 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@260 -- # ip netns exec cvl_0_0_ns_spdk ip link set cvl_0_0 up 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@261 -- # ip netns exec cvl_0_0_ns_spdk ip link set lo up 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@264 -- # iptables -I INPUT 1 -i cvl_0_1 -p tcp --dport 4420 -j ACCEPT 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@267 -- # ping -c 1 10.0.0.2 00:29:30.446 PING 10.0.0.2 (10.0.0.2) 56(84) bytes of data. 00:29:30.446 64 bytes from 10.0.0.2: icmp_seq=1 ttl=64 time=0.271 ms 00:29:30.446 00:29:30.446 --- 10.0.0.2 ping statistics --- 00:29:30.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.446 rtt min/avg/max/mdev = 0.271/0.271/0.271/0.000 ms 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@268 -- # ip netns exec cvl_0_0_ns_spdk ping -c 1 10.0.0.1 00:29:30.446 PING 10.0.0.1 (10.0.0.1) 56(84) bytes of data. 
00:29:30.446 64 bytes from 10.0.0.1: icmp_seq=1 ttl=64 time=0.513 ms 00:29:30.446 00:29:30.446 --- 10.0.0.1 ping statistics --- 00:29:30.446 1 packets transmitted, 1 received, 0% packet loss, time 0ms 00:29:30.446 rtt min/avg/max/mdev = 0.513/0.513/0.513/0.000 ms 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@270 -- # NVMF_APP=("${NVMF_TARGET_NS_CMD[@]}" "${NVMF_APP[@]}") 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@422 -- # return 0 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@450 -- # '[' iso == iso ']' 00:29:30.446 12:15:17 nvmf_abort_qd_sizes -- nvmf/common.sh@451 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:29:32.986 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:32.986 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.2 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:29:33.244 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:29:34.214 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@454 -- # NVMF_TRANSPORT_OPTS='-t tcp' 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@455 -- # [[ tcp == \r\d\m\a ]] 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@464 -- # [[ tcp == \t\c\p ]] 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@465 -- # NVMF_TRANSPORT_OPTS='-t tcp -o' 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@468 -- # '[' tcp == tcp ']' 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@474 -- # modprobe nvme-tcp 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@71 -- # nvmfappstart -m 0xf 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@479 -- # timing_enter start_nvmf_tgt 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@722 -- # xtrace_disable 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@481 -- # nvmfpid=516508 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@482 -- # waitforlisten 516508 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- nvmf/common.sh@480 -- # ip netns exec cvl_0_0_ns_spdk /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@829 -- # '[' -z 516508 ']' 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@834 -- # local max_retries=100 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
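nvmfappstart then launches the SPDK target inside that namespace (note the "ip netns exec cvl_0_0_ns_spdk" prefix on the nvmf_tgt command line) and blocks until the app's RPC socket answers. Roughly, with paths shortened relative to the workspace:

ip netns exec cvl_0_0_ns_spdk ./build/bin/nvmf_tgt -i 0 -e 0xFFFF -m 0xf &
nvmfpid=$!
# waitforlisten: poll the UNIX-domain RPC socket until the app is up
until ./scripts/rpc.py -t 1 -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; do
    sleep 0.5
done
echo "nvmf_tgt is listening on /var/tmp/spdk.sock (pid $nvmfpid)"

The socket lives on the shared filesystem, so the later rpc_cmd calls in the log do not need the netns prefix even though the process itself runs inside the namespace.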
00:29:34.214 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@838 -- # xtrace_disable 00:29:34.214 12:15:21 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:34.214 [2024-07-25 12:15:21.312461] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:29:34.214 [2024-07-25 12:15:21.312508] [ DPDK EAL parameters: nvmf -c 0xf --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:29:34.214 EAL: No free 2048 kB hugepages reported on node 1 00:29:34.214 [2024-07-25 12:15:21.370129] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 4 00:29:34.214 [2024-07-25 12:15:21.454303] app.c: 603:app_setup_trace: *NOTICE*: Tracepoint Group Mask 0xFFFF specified. 00:29:34.214 [2024-07-25 12:15:21.454339] app.c: 604:app_setup_trace: *NOTICE*: Use 'spdk_trace -s nvmf -i 0' to capture a snapshot of events at runtime. 00:29:34.214 [2024-07-25 12:15:21.454347] app.c: 609:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:29:34.214 [2024-07-25 12:15:21.454353] app.c: 610:app_setup_trace: *NOTICE*: SPDK application currently running. 00:29:34.214 [2024-07-25 12:15:21.454358] app.c: 611:app_setup_trace: *NOTICE*: Or copy /dev/shm/nvmf_trace.0 for offline analysis/debug. 00:29:34.214 [2024-07-25 12:15:21.454407] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:29:34.214 [2024-07-25 12:15:21.454427] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 2 00:29:34.214 [2024-07-25 12:15:21.454560] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 3 00:29:34.214 [2024-07-25 12:15:21.454561] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@862 -- # return 0 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@483 -- # timing_exit start_nvmf_tgt 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@728 -- # xtrace_disable 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- nvmf/common.sh@484 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini' SIGINT SIGTERM EXIT 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@73 -- # trap 'process_shm --id $NVMF_APP_SHM_ID || :; nvmftestfini || :; clean_kernel_target' SIGINT SIGTERM EXIT 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # mapfile -t nvmes 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@75 -- # nvme_in_userspace 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@309 -- # local bdf bdfs 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@310 -- # local nvmes 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@312 -- # [[ -n 0000:5e:00.0 ]] 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@313 -- # nvmes=(${pci_bus_cache["0x010802"]}) 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@318 -- # for bdf in "${nvmes[@]}" 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@319 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:5e:00.0 ]] 00:29:35.154 12:15:22 
nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # uname -s 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@320 -- # [[ Linux == FreeBSD ]] 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@323 -- # bdfs+=("$bdf") 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@325 -- # (( 1 )) 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- scripts/common.sh@326 -- # printf '%s\n' 0000:5e:00.0 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@76 -- # (( 1 > 0 )) 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@78 -- # nvme=0000:5e:00.0 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@80 -- # run_test spdk_target_abort spdk_target 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:35.154 12:15:22 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:35.154 ************************************ 00:29:35.154 START TEST spdk_target_abort 00:29:35.154 ************************************ 00:29:35.154 12:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1123 -- # spdk_target 00:29:35.154 12:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@43 -- # local name=spdk_target 00:29:35.154 12:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@45 -- # rpc_cmd bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target 00:29:35.154 12:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:35.154 12:15:22 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.446 spdk_targetn1 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@47 -- # rpc_cmd nvmf_create_transport -t tcp -o -u 8192 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.446 [2024-07-25 12:15:25.035662] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@48 -- # rpc_cmd nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@49 -- # rpc_cmd nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- 
target/abort_qd_sizes.sh@50 -- # rpc_cmd nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:38.446 [2024-07-25 12:15:25.068521] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 10.0.0.2 port 4420 *** 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@52 -- # rabort tcp IPv4 10.0.0.2 4420 nqn.2016-06.io.spdk:testnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.2 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2' 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420' 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:38.446 12:15:25 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:38.446 EAL: No free 2048 kB hugepages 
reported on node 1 00:29:41.735 Initializing NVMe Controllers 00:29:41.735 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:41.735 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:41.735 Initialization complete. Launching workers. 00:29:41.735 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 5496, failed: 0 00:29:41.735 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1773, failed to submit 3723 00:29:41.735 success 859, unsuccess 914, failed 0 00:29:41.735 12:15:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:41.735 12:15:28 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:41.735 EAL: No free 2048 kB hugepages reported on node 1 00:29:45.029 Initializing NVMe Controllers 00:29:45.029 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:45.029 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:45.029 Initialization complete. Launching workers. 00:29:45.029 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 8666, failed: 0 00:29:45.029 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 1238, failed to submit 7428 00:29:45.029 success 367, unsuccess 871, failed 0 00:29:45.029 12:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:45.029 12:15:31 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:45.029 EAL: No free 2048 kB hugepages reported on node 1 00:29:47.566 Initializing NVMe Controllers 00:29:47.566 Attached to NVMe over Fabrics controller at 10.0.0.2:4420: nqn.2016-06.io.spdk:testnqn 00:29:47.566 Associating TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:47.566 Initialization complete. Launching workers. 
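Condensed, the spdk_target_abort flow being traced here is: attach the boot NVMe disk at 0000:5e:00.0 as a local bdev, export it over NVMe/TCP on the namespace's 10.0.0.2 address, then let the abort example issue racing I/O and aborts at queue depths 4, 24 and 64. A sketch of the same sequence (rpc.py against the default /var/tmp/spdk.sock):

./scripts/rpc.py bdev_nvme_attach_controller -t pcie -a 0000:5e:00.0 -b spdk_target   # creates bdev spdk_targetn1
./scripts/rpc.py nvmf_create_transport -t tcp -o -u 8192
./scripts/rpc.py nvmf_create_subsystem nqn.2016-06.io.spdk:testnqn -a -s SPDKISFASTANDAWESOME
./scripts/rpc.py nvmf_subsystem_add_ns nqn.2016-06.io.spdk:testnqn spdk_targetn1
./scripts/rpc.py nvmf_subsystem_add_listener nqn.2016-06.io.spdk:testnqn -t tcp -a 10.0.0.2 -s 4420
for qd in 4 24 64; do
    ./build/examples/abort -q "$qd" -w rw -M 50 -o 4096 \
        -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.2 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn'
done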
00:29:47.566 NS: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 33566, failed: 0 00:29:47.566 CTRLR: TCP (addr:10.0.0.2 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 2737, failed to submit 30829 00:29:47.566 success 680, unsuccess 2057, failed 0 00:29:47.566 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@54 -- # rpc_cmd nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@55 -- # rpc_cmd bdev_nvme_detach_controller spdk_target 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@559 -- # xtrace_disable 00:29:47.567 12:15:34 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- target/abort_qd_sizes.sh@61 -- # killprocess 516508 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@948 -- # '[' -z 516508 ']' 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@952 -- # kill -0 516508 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # uname 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 516508 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@966 -- # echo 'killing process with pid 516508' 00:29:48.948 killing process with pid 516508 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@967 -- # kill 516508 00:29:48.948 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@972 -- # wait 516508 00:29:49.208 00:29:49.208 real 0m14.099s 00:29:49.208 user 0m56.182s 00:29:49.208 sys 0m2.259s 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.spdk_target_abort -- common/autotest_common.sh@10 -- # set +x 00:29:49.208 ************************************ 00:29:49.208 END TEST spdk_target_abort 00:29:49.208 ************************************ 00:29:49.208 12:15:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:29:49.208 12:15:36 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@81 -- # run_test kernel_target_abort kernel_target 00:29:49.208 12:15:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:29:49.208 12:15:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@1105 -- # xtrace_disable 00:29:49.208 12:15:36 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:29:49.208 
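After the third run the test tears its state back down before the kernel-target variant begins: delete the subsystem, detach the PCIe controller, and kill the target process (that is what killprocess 516508 amounts to). In RPC terms:

./scripts/rpc.py nvmf_delete_subsystem nqn.2016-06.io.spdk:testnqn
./scripts/rpc.py bdev_nvme_detach_controller spdk_target
kill "$nvmfpid" && wait "$nvmfpid"    # killprocess also checks the pid's process name first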
************************************ 00:29:49.208 START TEST kernel_target_abort 00:29:49.208 ************************************ 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1123 -- # kernel_target 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # get_main_ns_ip 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@741 -- # local ip 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # ip_candidates=() 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@742 -- # local -A ip_candidates 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@744 -- # ip_candidates["rdma"]=NVMF_FIRST_TARGET_IP 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@745 -- # ip_candidates["tcp"]=NVMF_INITIATOR_IP 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z tcp ]] 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@747 -- # [[ -z NVMF_INITIATOR_IP ]] 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@748 -- # ip=NVMF_INITIATOR_IP 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@750 -- # [[ -z 10.0.0.1 ]] 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@755 -- # echo 10.0.0.1 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@65 -- # configure_kernel_target nqn.2016-06.io.spdk:testnqn 10.0.0.1 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@632 -- # local kernel_name=nqn.2016-06.io.spdk:testnqn kernel_target_ip=10.0.0.1 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@634 -- # nvmet=/sys/kernel/config/nvmet 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@635 -- # kernel_subsystem=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@636 -- # kernel_namespace=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@637 -- # kernel_port=/sys/kernel/config/nvmet/ports/1 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@639 -- # local block nvme 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@641 -- # [[ ! 
-e /sys/module/nvmet ]] 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@642 -- # modprobe nvmet 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@645 -- # [[ -e /sys/kernel/config/nvmet ]] 00:29:49.208 12:15:36 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@647 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:29:51.748 Waiting for block devices as requested 00:29:51.748 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:29:51.748 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.007 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:52.007 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:52.007 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:52.007 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:52.266 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:52.266 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:52.266 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:52.525 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:29:52.525 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:29:52.525 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:29:52.525 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:29:52.784 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 00:29:52.784 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:29:52.784 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:29:53.044 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@650 -- # for block in /sys/block/nvme* 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@651 -- # [[ -e /sys/block/nvme0n1 ]] 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@652 -- # is_block_zoned nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1662 -- # local device=nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1664 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1665 -- # [[ none != none ]] 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # block_in_use nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@378 -- # local block=nvme0n1 pt 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@387 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/spdk-gpt.py nvme0n1 00:29:53.044 No valid GPT data, bailing 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@391 -- # pt= 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- scripts/common.sh@392 -- # return 1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@653 -- # nvme=/dev/nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@656 -- # [[ -b /dev/nvme0n1 ]] 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@658 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@659 -- # mkdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:29:53.044 12:15:40 
nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@660 -- # mkdir /sys/kernel/config/nvmet/ports/1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@665 -- # echo SPDK-nqn.2016-06.io.spdk:testnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@667 -- # echo 1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@668 -- # echo /dev/nvme0n1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@669 -- # echo 1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@671 -- # echo 10.0.0.1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@672 -- # echo tcp 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@673 -- # echo 4420 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@674 -- # echo ipv4 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@677 -- # ln -s /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn /sys/kernel/config/nvmet/ports/1/subsystems/ 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@680 -- # nvme discover --hostnqn=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 --hostid=80aaeb9f-0274-ea11-906e-0017a4403562 -a 10.0.0.1 -t tcp -s 4420 00:29:53.044 00:29:53.044 Discovery Log Number of Records 2, Generation counter 2 00:29:53.044 =====Discovery Log Entry 0====== 00:29:53.044 trtype: tcp 00:29:53.044 adrfam: ipv4 00:29:53.044 subtype: current discovery subsystem 00:29:53.044 treq: not specified, sq flow control disable supported 00:29:53.044 portid: 1 00:29:53.044 trsvcid: 4420 00:29:53.044 subnqn: nqn.2014-08.org.nvmexpress.discovery 00:29:53.044 traddr: 10.0.0.1 00:29:53.044 eflags: none 00:29:53.044 sectype: none 00:29:53.044 =====Discovery Log Entry 1====== 00:29:53.044 trtype: tcp 00:29:53.044 adrfam: ipv4 00:29:53.044 subtype: nvme subsystem 00:29:53.044 treq: not specified, sq flow control disable supported 00:29:53.044 portid: 1 00:29:53.044 trsvcid: 4420 00:29:53.044 subnqn: nqn.2016-06.io.spdk:testnqn 00:29:53.044 traddr: 10.0.0.1 00:29:53.044 eflags: none 00:29:53.044 sectype: none 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@66 -- # rabort tcp IPv4 10.0.0.1 4420 nqn.2016-06.io.spdk:testnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@17 -- # local trtype=tcp 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@18 -- # local adrfam=IPv4 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@19 -- # local traddr=10.0.0.1 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@20 -- # local trsvcid=4420 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@21 -- # local subnqn=nqn.2016-06.io.spdk:testnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@23 -- # local qds qd 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@24 -- # local target r 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@26 -- # qds=(4 24 64) 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:53.044 12:15:40 
nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target=trtype:tcp 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4' 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1' 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420' 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@28 -- # for r in trtype adrfam traddr trsvcid subnqn 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@29 -- # target='trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:53.044 12:15:40 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 4 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:53.044 EAL: No free 2048 kB hugepages reported on node 1 00:29:56.335 Initializing NVMe Controllers 00:29:56.335 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:56.335 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:56.335 Initialization complete. Launching workers. 00:29:56.335 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 27800, failed: 0 00:29:56.335 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 27800, failed to submit 0 00:29:56.335 success 0, unsuccess 27800, failed 0 00:29:56.335 12:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:56.335 12:15:43 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 24 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:56.335 EAL: No free 2048 kB hugepages reported on node 1 00:29:59.629 Initializing NVMe Controllers 00:29:59.629 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:29:59.629 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:29:59.629 Initialization complete. Launching workers. 
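kernel_target_abort repeats the exercise against the in-kernel nvmet target instead of the SPDK one: setup.sh reset hands the disk back to the kernel nvme driver, and configure_kernel_target builds the export purely through configfs, which is what the mkdir/echo/ln -s lines above are doing. A condensed sketch of that configuration (attribute file names as in the mainline nvmet configfs layout; the test also writes an "SPDK-<nqn>" identification string into one of the subsystem attr_* files, omitted here):

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
modprobe nvmet
mkdir -p "$sub/namespaces/1" "$port"
echo 1            > "$sub/attr_allow_any_host"
echo /dev/nvme0n1 > "$sub/namespaces/1/device_path"
echo 1            > "$sub/namespaces/1/enable"
echo 10.0.0.1     > "$port/addr_traddr"
echo tcp          > "$port/addr_trtype"
echo 4420         > "$port/addr_trsvcid"
echo ipv4         > "$port/addr_adrfam"
ln -s "$sub" "$port/subsystems/"           # linking the subsystem into the port activates the listener
nvme discover -t tcp -a 10.0.0.1 -s 4420   # should list testnqn next to the discovery subsystem, as above

The abort runs then use traddr 10.0.0.1, the root-namespace address, because the kernel target listens on the host side rather than inside the SPDK namespace.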
00:29:59.629 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57715, failed: 0 00:29:59.629 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14546, failed to submit 43169 00:29:59.629 success 0, unsuccess 14546, failed 0 00:29:59.629 12:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@32 -- # for qd in "${qds[@]}" 00:29:59.629 12:15:46 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@34 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/abort -q 64 -w rw -M 50 -o 4096 -r 'trtype:tcp adrfam:IPv4 traddr:10.0.0.1 trsvcid:4420 subnqn:nqn.2016-06.io.spdk:testnqn' 00:29:59.629 EAL: No free 2048 kB hugepages reported on node 1 00:30:02.215 Initializing NVMe Controllers 00:30:02.215 Attached to NVMe over Fabrics controller at 10.0.0.1:4420: nqn.2016-06.io.spdk:testnqn 00:30:02.215 Associating TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 with lcore 0 00:30:02.215 Initialization complete. Launching workers. 00:30:02.215 NS: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) NSID 1 I/O completed: 57187, failed: 0 00:30:02.215 CTRLR: TCP (addr:10.0.0.1 subnqn:nqn.2016-06.io.spdk:testnqn) abort submitted 14266, failed to submit 42921 00:30:02.215 success 0, unsuccess 14266, failed 0 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- target/abort_qd_sizes.sh@67 -- # clean_kernel_target 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@684 -- # [[ -e /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn ]] 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@686 -- # echo 0 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@688 -- # rm -f /sys/kernel/config/nvmet/ports/1/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@689 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn/namespaces/1 00:30:02.215 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@690 -- # rmdir /sys/kernel/config/nvmet/ports/1 00:30:02.216 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@691 -- # rmdir /sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn 00:30:02.216 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@693 -- # modules=(/sys/module/nvmet/holders/*) 00:30:02.216 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@695 -- # modprobe -r nvmet_tcp nvmet 00:30:02.216 12:15:49 nvmf_abort_qd_sizes.kernel_target_abort -- nvmf/common.sh@698 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh 00:30:04.753 0000:00:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.2 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:00:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.7 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.6 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.5 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.4 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.3 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.2 (8086 2021): ioatdma -> 
vfio-pci 00:30:04.753 0000:80:04.1 (8086 2021): ioatdma -> vfio-pci 00:30:04.753 0000:80:04.0 (8086 2021): ioatdma -> vfio-pci 00:30:05.322 0000:5e:00.0 (8086 0a54): nvme -> vfio-pci 00:30:05.322 00:30:05.322 real 0m16.069s 00:30:05.322 user 0m3.918s 00:30:05.322 sys 0m4.754s 00:30:05.322 12:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:05.322 12:15:52 nvmf_abort_qd_sizes.kernel_target_abort -- common/autotest_common.sh@10 -- # set +x 00:30:05.322 ************************************ 00:30:05.322 END TEST kernel_target_abort 00:30:05.322 ************************************ 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@1142 -- # return 0 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- target/abort_qd_sizes.sh@84 -- # nvmftestfini 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@488 -- # nvmfcleanup 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@117 -- # sync 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@119 -- # '[' tcp == tcp ']' 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@120 -- # set +e 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@121 -- # for i in {1..20} 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@122 -- # modprobe -v -r nvme-tcp 00:30:05.322 rmmod nvme_tcp 00:30:05.322 rmmod nvme_fabrics 00:30:05.322 rmmod nvme_keyring 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@123 -- # modprobe -v -r nvme-fabrics 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@124 -- # set -e 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@125 -- # return 0 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@489 -- # '[' -n 516508 ']' 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@490 -- # killprocess 516508 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@948 -- # '[' -z 516508 ']' 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@952 -- # kill -0 516508 00:30:05.322 /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/common/autotest_common.sh: line 952: kill: (516508) - No such process 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- common/autotest_common.sh@975 -- # echo 'Process with pid 516508 is not found' 00:30:05.322 Process with pid 516508 is not found 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@492 -- # '[' iso == iso ']' 00:30:05.322 12:15:52 nvmf_abort_qd_sizes -- nvmf/common.sh@493 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/setup.sh reset 00:30:07.858 Waiting for block devices as requested 00:30:07.858 0000:5e:00.0 (8086 0a54): vfio-pci -> nvme 00:30:08.117 0000:00:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.117 0000:00:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.117 0000:00:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.117 0000:00:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.377 0000:00:04.3 (8086 2021): vfio-pci -> ioatdma 00:30:08.377 0000:00:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.377 0000:00:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:08.377 0000:00:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:08.636 0000:80:04.7 (8086 2021): vfio-pci -> ioatdma 00:30:08.636 0000:80:04.6 (8086 2021): vfio-pci -> ioatdma 00:30:08.636 0000:80:04.5 (8086 2021): vfio-pci -> ioatdma 00:30:08.895 0000:80:04.4 (8086 2021): vfio-pci -> ioatdma 00:30:08.895 0000:80:04.3 (8086 2021): vfio-pci -> ioatdma 
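Cleanup mirrors the setup: clean_kernel_target unpicks the configfs tree in reverse order and unloads nvmet, then nvmftestfini removes the initiator-side modules (the rmmod nvme_tcp / nvme_fabrics / nvme_keyring lines above). Roughly, with the target of the bare "echo 0" taken as an assumption since redirections are not visible in the trace:

sub=/sys/kernel/config/nvmet/subsystems/nqn.2016-06.io.spdk:testnqn
port=/sys/kernel/config/nvmet/ports/1
echo 0 > "$sub/namespaces/1/enable"                      # quiesce the namespace first (assumed path)
rm -f  "$port/subsystems/nqn.2016-06.io.spdk:testnqn"    # detach the subsystem from the port
rmdir  "$sub/namespaces/1" "$port" "$sub"
modprobe -r nvmet_tcp nvmet
modprobe -r nvme-tcp nvme-fabrics                        # initiator side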
00:30:08.895 0000:80:04.2 (8086 2021): vfio-pci -> ioatdma 00:30:08.895 0000:80:04.1 (8086 2021): vfio-pci -> ioatdma 00:30:09.154 0000:80:04.0 (8086 2021): vfio-pci -> ioatdma 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@495 -- # [[ tcp == \t\c\p ]] 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@496 -- # nvmf_tcp_fini 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@274 -- # [[ cvl_0_0_ns_spdk == \n\v\m\f\_\t\g\t\_\n\s ]] 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@278 -- # remove_spdk_ns 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- nvmf/common.sh@628 -- # xtrace_disable_per_cmd _remove_spdk_ns 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # eval '_remove_spdk_ns 13> /dev/null' 00:30:09.154 12:15:56 nvmf_abort_qd_sizes -- common/autotest_common.sh@22 -- # _remove_spdk_ns 00:30:11.061 12:15:58 nvmf_abort_qd_sizes -- nvmf/common.sh@279 -- # ip -4 addr flush cvl_0_1 00:30:11.061 00:30:11.061 real 0m46.088s 00:30:11.061 user 1m4.002s 00:30:11.061 sys 0m14.952s 00:30:11.061 12:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:11.061 12:15:58 nvmf_abort_qd_sizes -- common/autotest_common.sh@10 -- # set +x 00:30:11.061 ************************************ 00:30:11.061 END TEST nvmf_abort_qd_sizes 00:30:11.061 ************************************ 00:30:11.321 12:15:58 -- common/autotest_common.sh@1142 -- # return 0 00:30:11.321 12:15:58 -- spdk/autotest.sh@295 -- # run_test keyring_file /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:11.321 12:15:58 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:11.321 12:15:58 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:11.321 12:15:58 -- common/autotest_common.sh@10 -- # set +x 00:30:11.321 ************************************ 00:30:11.321 START TEST keyring_file 00:30:11.321 ************************************ 00:30:11.321 12:15:58 keyring_file -- common/autotest_common.sh@1123 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/file.sh 00:30:11.321 * Looking for test storage... 
00:30:11.321 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:11.321 12:15:58 keyring_file -- keyring/file.sh@11 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:11.321 12:15:58 keyring_file -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@7 -- # uname -s 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:11.321 12:15:58 keyring_file -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:11.322 12:15:58 keyring_file -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:11.322 12:15:58 keyring_file -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:11.322 12:15:58 keyring_file -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:11.322 12:15:58 keyring_file -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.322 12:15:58 keyring_file -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.322 12:15:58 keyring_file -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.322 12:15:58 keyring_file -- paths/export.sh@5 -- # export PATH 00:30:11.322 12:15:58 keyring_file -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@47 -- # : 0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@13 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@14 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@15 -- # key0=00112233445566778899aabbccddeeff 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@16 -- # key1=112233445566778899aabbccddeeff00 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@24 -- # trap cleanup EXIT 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@26 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.nb7Z6Qj8HJ 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:11.322 12:15:58 keyring_file -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.nb7Z6Qj8HJ 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.nb7Z6Qj8HJ 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@26 -- # key0path=/tmp/tmp.nb7Z6Qj8HJ 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@27 -- # prep_key key1 112233445566778899aabbccddeeff00 0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # name=key1 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.vUNFoned13 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:11.322 12:15:58 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.vUNFoned13 00:30:11.322 12:15:58 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.vUNFoned13 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@27 -- # key1path=/tmp/tmp.vUNFoned13 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@30 -- # tgtpid=525080 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@32 -- # waitforlisten 525080 00:30:11.322 12:15:58 keyring_file -- keyring/file.sh@29 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 525080 ']' 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:11.322 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:11.322 12:15:58 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:11.581 [2024-07-25 12:15:58.595772] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
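The keyring_file suite is about TLS, so before anything else it materialises two PSKs on disk: each 32-hex-digit key is wrapped into the NVMe TLS PSK interchange form ("NVMeTLSkey-1:<hash>:<base64 payload>:") by the python helper behind format_interchange_psk, written to a mktemp file, and locked down to mode 0600. Those files are later handed to the bdevperf instance as named keyring entries. A sketch of the flow, reusing the sourced helper rather than re-deriving the encoding:

key0=00112233445566778899aabbccddeeff
key0path=$(mktemp)                                  # e.g. /tmp/tmp.nb7Z6Qj8HJ in this run
format_interchange_psk "$key0" 0 > "$key0path"      # helper from the sourced nvmf/common.sh
chmod 0600 "$key0path"
# once bdevperf is up with -r /var/tmp/bperf.sock, register the file as key "key0"
./scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 "$key0path"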
00:30:11.581 [2024-07-25 12:15:58.595822] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525080 ] 00:30:11.581 EAL: No free 2048 kB hugepages reported on node 1 00:30:11.581 [2024-07-25 12:15:58.649725] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:11.581 [2024-07-25 12:15:58.729715] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:12.148 12:15:59 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:12.148 12:15:59 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:12.148 12:15:59 keyring_file -- keyring/file.sh@33 -- # rpc_cmd 00:30:12.148 12:15:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.148 12:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.148 [2024-07-25 12:15:59.395791] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:12.408 null0 00:30:12.408 [2024-07-25 12:15:59.427844] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:12.408 [2024-07-25 12:15:59.428032] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:12.408 [2024-07-25 12:15:59.435846] tcp.c:3725:nvmf_tcp_subsystem_add_host: *WARNING*: nvmf_tcp_psk_path: deprecated feature PSK path to be removed in v24.09 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:12.408 12:15:59 keyring_file -- keyring/file.sh@43 -- # NOT rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@636 -- # local arg=rpc_cmd 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@640 -- # type -t rpc_cmd 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@651 -- # rpc_cmd nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.408 [2024-07-25 12:15:59.447878] nvmf_rpc.c: 788:nvmf_rpc_listen_paused: *ERROR*: Listener already exists 00:30:12.408 request: 00:30:12.408 { 00:30:12.408 "nqn": "nqn.2016-06.io.spdk:cnode0", 00:30:12.408 "secure_channel": false, 00:30:12.408 "listen_address": { 00:30:12.408 "trtype": "tcp", 00:30:12.408 "traddr": "127.0.0.1", 00:30:12.408 "trsvcid": "4420" 00:30:12.408 }, 00:30:12.408 "method": "nvmf_subsystem_add_listener", 00:30:12.408 "req_id": 1 00:30:12.408 } 00:30:12.408 Got JSON-RPC error response 00:30:12.408 response: 00:30:12.408 { 00:30:12.408 "code": -32602, 00:30:12.408 "message": "Invalid parameters" 00:30:12.408 } 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@587 -- # [[ 1 == 0 ]] 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@651 -- # es=1 
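The NOT wrapper in the trace is the suite's stock way of asserting that a command fails: the spdk_tgt already listens on 127.0.0.1:4420 for nqn.2016-06.io.spdk:cnode0, so a second nvmf_subsystem_add_listener on the same address has to come back with the "Listener already exists" notice and the -32602 JSON-RPC response shown above, and the test only passes because the RPC exits non-zero. The same check by hand:

if ./scripts/rpc.py nvmf_subsystem_add_listener -t tcp -a 127.0.0.1 -s 4420 nqn.2016-06.io.spdk:cnode0; then
    echo "unexpected: duplicate listener was accepted" >&2
    exit 1
else
    echo "expected failure: listener already exists"
fi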
00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:12.408 12:15:59 keyring_file -- keyring/file.sh@46 -- # bperfpid=525094 00:30:12.408 12:15:59 keyring_file -- keyring/file.sh@48 -- # waitforlisten 525094 /var/tmp/bperf.sock 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 525094 ']' 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:12.408 12:15:59 keyring_file -- keyring/file.sh@45 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:12.408 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:12.408 12:15:59 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:12.408 [2024-07-25 12:15:59.483973] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 00:30:12.408 [2024-07-25 12:15:59.484019] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid525094 ] 00:30:12.408 EAL: No free 2048 kB hugepages reported on node 1 00:30:12.408 [2024-07-25 12:15:59.537074] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:12.408 [2024-07-25 12:15:59.614656] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:13.344 12:16:00 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:13.344 12:16:00 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:13.344 12:16:00 keyring_file -- keyring/file.sh@49 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:13.344 12:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:13.344 12:16:00 keyring_file -- keyring/file.sh@50 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vUNFoned13 00:30:13.344 12:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vUNFoned13 00:30:13.604 12:16:00 keyring_file -- keyring/file.sh@51 -- # get_key key0 00:30:13.604 12:16:00 keyring_file -- keyring/file.sh@51 -- # jq -r .path 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.604 12:16:00 keyring_file -- keyring/file.sh@51 -- # [[ /tmp/tmp.nb7Z6Qj8HJ == \/\t\m\p\/\t\m\p\.\n\b\7\Z\6\Q\j\8\H\J ]] 00:30:13.604 12:16:00 keyring_file -- keyring/file.sh@52 
-- # get_key key1 00:30:13.604 12:16:00 keyring_file -- keyring/file.sh@52 -- # jq -r .path 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:13.604 12:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.863 12:16:00 keyring_file -- keyring/file.sh@52 -- # [[ /tmp/tmp.vUNFoned13 == \/\t\m\p\/\t\m\p\.\v\U\N\F\o\n\e\d\1\3 ]] 00:30:13.863 12:16:00 keyring_file -- keyring/file.sh@53 -- # get_refcnt key0 00:30:13.863 12:16:00 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:13.863 12:16:00 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:13.863 12:16:00 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:13.863 12:16:00 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:13.863 12:16:00 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.122 12:16:01 keyring_file -- keyring/file.sh@53 -- # (( 1 == 1 )) 00:30:14.122 12:16:01 keyring_file -- keyring/file.sh@54 -- # get_refcnt key1 00:30:14.122 12:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:14.122 12:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.122 12:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.122 12:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:14.122 12:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.122 12:16:01 keyring_file -- keyring/file.sh@54 -- # (( 1 == 1 )) 00:30:14.123 12:16:01 keyring_file -- keyring/file.sh@57 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:14.123 12:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:14.382 [2024-07-25 12:16:01.500576] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:14.382 nvme0n1 00:30:14.382 12:16:01 keyring_file -- keyring/file.sh@59 -- # get_refcnt key0 00:30:14.382 12:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:14.382 12:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.382 12:16:01 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:14.382 12:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:14.382 12:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.641 12:16:01 keyring_file -- keyring/file.sh@59 -- # (( 2 == 2 )) 00:30:14.641 12:16:01 keyring_file -- keyring/file.sh@60 -- # get_refcnt key1 00:30:14.641 12:16:01 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:14.641 12:16:01 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:14.641 12:16:01 keyring_file -- keyring/common.sh@10 -- # 
bperf_cmd keyring_get_keys 00:30:14.641 12:16:01 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:14.641 12:16:01 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:14.900 12:16:01 keyring_file -- keyring/file.sh@60 -- # (( 1 == 1 )) 00:30:14.900 12:16:01 keyring_file -- keyring/file.sh@62 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:14.900 Running I/O for 1 seconds... 00:30:15.838 00:30:15.838 Latency(us) 00:30:15.838 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:15.838 Job: nvme0n1 (Core Mask 0x2, workload: randrw, percentage: 50, depth: 128, IO size: 4096) 00:30:15.838 nvme0n1 : 1.03 3191.14 12.47 0.00 0.00 39707.56 10314.80 63826.37 00:30:15.838 =================================================================================================================== 00:30:15.838 Total : 3191.14 12.47 0.00 0.00 39707.56 10314.80 63826.37 00:30:15.838 0 00:30:15.838 12:16:03 keyring_file -- keyring/file.sh@64 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:15.838 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:16.097 12:16:03 keyring_file -- keyring/file.sh@65 -- # get_refcnt key0 00:30:16.097 12:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.097 12:16:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.097 12:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.097 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.097 12:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.356 12:16:03 keyring_file -- keyring/file.sh@65 -- # (( 1 == 1 )) 00:30:16.356 12:16:03 keyring_file -- keyring/file.sh@66 -- # get_refcnt key1 00:30:16.356 12:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:16.356 12:16:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.356 12:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.356 12:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:16.356 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.616 12:16:03 keyring_file -- keyring/file.sh@66 -- # (( 1 == 1 )) 00:30:16.616 12:16:03 keyring_file -- keyring/file.sh@69 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@640 -- # type 
-t bperf_cmd 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key1 00:30:16.616 [2024-07-25 12:16:03.786961] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:16.616 [2024-07-25 12:16:03.787521] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161b820 (107): Transport endpoint is not connected 00:30:16.616 [2024-07-25 12:16:03.788516] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0x161b820 (9): Bad file descriptor 00:30:16.616 [2024-07-25 12:16:03.789515] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:16.616 [2024-07-25 12:16:03.789526] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:16.616 [2024-07-25 12:16:03.789533] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:16.616 request: 00:30:16.616 { 00:30:16.616 "name": "nvme0", 00:30:16.616 "trtype": "tcp", 00:30:16.616 "traddr": "127.0.0.1", 00:30:16.616 "adrfam": "ipv4", 00:30:16.616 "trsvcid": "4420", 00:30:16.616 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:16.616 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:16.616 "prchk_reftag": false, 00:30:16.616 "prchk_guard": false, 00:30:16.616 "hdgst": false, 00:30:16.616 "ddgst": false, 00:30:16.616 "psk": "key1", 00:30:16.616 "method": "bdev_nvme_attach_controller", 00:30:16.616 "req_id": 1 00:30:16.616 } 00:30:16.616 Got JSON-RPC error response 00:30:16.616 response: 00:30:16.616 { 00:30:16.616 "code": -5, 00:30:16.616 "message": "Input/output error" 00:30:16.616 } 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:16.616 12:16:03 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:16.616 12:16:03 keyring_file -- keyring/file.sh@71 -- # get_refcnt key0 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:16.616 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:16.877 12:16:03 keyring_file -- keyring/file.sh@71 -- # (( 1 == 1 )) 00:30:16.877 12:16:03 keyring_file -- keyring/file.sh@72 -- # get_refcnt key1 00:30:16.877 12:16:03 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:16.877 12:16:03 keyring_file -- keyring/common.sh@12 -- # jq 
-r .refcnt 00:30:16.877 12:16:03 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:16.877 12:16:03 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:16.877 12:16:03 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.136 12:16:04 keyring_file -- keyring/file.sh@72 -- # (( 1 == 1 )) 00:30:17.136 12:16:04 keyring_file -- keyring/file.sh@75 -- # bperf_cmd keyring_file_remove_key key0 00:30:17.136 12:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:17.136 12:16:04 keyring_file -- keyring/file.sh@76 -- # bperf_cmd keyring_file_remove_key key1 00:30:17.136 12:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key1 00:30:17.396 12:16:04 keyring_file -- keyring/file.sh@77 -- # bperf_cmd keyring_get_keys 00:30:17.396 12:16:04 keyring_file -- keyring/file.sh@77 -- # jq length 00:30:17.396 12:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:17.657 12:16:04 keyring_file -- keyring/file.sh@77 -- # (( 0 == 0 )) 00:30:17.657 12:16:04 keyring_file -- keyring/file.sh@80 -- # chmod 0660 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- keyring/file.sh@81 -- # NOT bperf_cmd keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 [2024-07-25 12:16:04.867464] keyring.c: 34:keyring_file_check_path: *ERROR*: Invalid permissions for key file '/tmp/tmp.nb7Z6Qj8HJ': 0100660 00:30:17.657 [2024-07-25 12:16:04.867487] keyring.c: 126:spdk_keyring_add_key: *ERROR*: Failed to add key 'key0' to the keyring 00:30:17.657 request: 00:30:17.657 { 00:30:17.657 "name": "key0", 00:30:17.657 "path": "/tmp/tmp.nb7Z6Qj8HJ", 00:30:17.657 "method": "keyring_file_add_key", 00:30:17.657 "req_id": 1 00:30:17.657 } 00:30:17.657 Got JSON-RPC error response 00:30:17.657 response: 00:30:17.657 { 00:30:17.657 "code": -1, 00:30:17.657 "message": "Operation not permitted" 00:30:17.657 } 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:17.657 12:16:04 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 
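A condensed sketch of the permission gate being exercised here, using the key path and RPC socket from this run (the success/failure annotations are inferred from the surrounding log):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  chmod 0660 /tmp/tmp.nb7Z6Qj8HJ
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ   # rejected: Operation not permitted
  chmod 0600 /tmp/tmp.nb7Z6Qj8HJ
  $rpc -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ   # accepted once the file is owner-only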
00:30:17.657 12:16:04 keyring_file -- keyring/file.sh@84 -- # chmod 0600 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- keyring/file.sh@85 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.657 12:16:04 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.nb7Z6Qj8HJ 00:30:17.917 12:16:05 keyring_file -- keyring/file.sh@86 -- # rm -f /tmp/tmp.nb7Z6Qj8HJ 00:30:17.917 12:16:05 keyring_file -- keyring/file.sh@88 -- # get_refcnt key0 00:30:17.917 12:16:05 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:17.917 12:16:05 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:17.917 12:16:05 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:17.917 12:16:05 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:17.917 12:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:18.176 12:16:05 keyring_file -- keyring/file.sh@88 -- # (( 1 == 1 )) 00:30:18.176 12:16:05 keyring_file -- keyring/file.sh@90 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@648 -- # local es=0 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.176 12:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.176 [2024-07-25 12:16:05.384855] keyring.c: 29:keyring_file_check_path: *ERROR*: Could not stat key file '/tmp/tmp.nb7Z6Qj8HJ': No such file or directory 00:30:18.176 [2024-07-25 12:16:05.384877] nvme_tcp.c:2582:nvme_tcp_generate_tls_credentials: *ERROR*: Failed to obtain key 'key0': No such file or directory 00:30:18.176 [2024-07-25 12:16:05.384896] nvme.c: 683:nvme_ctrlr_probe: *ERROR*: Failed to construct NVMe controller for SSD: 127.0.0.1 00:30:18.176 [2024-07-25 12:16:05.384903] nvme.c: 830:nvme_probe_internal: *ERROR*: NVMe ctrlr scan failed 00:30:18.176 [2024-07-25 12:16:05.384909] bdev_nvme.c:6296:bdev_nvme_create: *ERROR*: No controller was found with provided trid (traddr: 127.0.0.1) 00:30:18.176 request: 00:30:18.176 { 00:30:18.176 "name": "nvme0", 00:30:18.176 "trtype": "tcp", 00:30:18.176 "traddr": "127.0.0.1", 00:30:18.176 "adrfam": "ipv4", 00:30:18.176 "trsvcid": "4420", 00:30:18.176 "subnqn": 
"nqn.2016-06.io.spdk:cnode0", 00:30:18.176 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:18.176 "prchk_reftag": false, 00:30:18.176 "prchk_guard": false, 00:30:18.176 "hdgst": false, 00:30:18.176 "ddgst": false, 00:30:18.176 "psk": "key0", 00:30:18.176 "method": "bdev_nvme_attach_controller", 00:30:18.176 "req_id": 1 00:30:18.176 } 00:30:18.176 Got JSON-RPC error response 00:30:18.176 response: 00:30:18.176 { 00:30:18.176 "code": -19, 00:30:18.176 "message": "No such device" 00:30:18.176 } 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@651 -- # es=1 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:18.176 12:16:05 keyring_file -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:18.176 12:16:05 keyring_file -- keyring/file.sh@92 -- # bperf_cmd keyring_file_remove_key key0 00:30:18.176 12:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:18.436 12:16:05 keyring_file -- keyring/file.sh@95 -- # prep_key key0 00112233445566778899aabbccddeeff 0 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@15 -- # local name key digest path 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@17 -- # name=key0 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@17 -- # digest=0 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@18 -- # mktemp 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@18 -- # path=/tmp/tmp.OV0jJxAFrp 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@702 -- # local prefix key digest 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@704 -- # digest=0 00:30:18.436 12:16:05 keyring_file -- nvmf/common.sh@705 -- # python - 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@21 -- # chmod 0600 /tmp/tmp.OV0jJxAFrp 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@23 -- # echo /tmp/tmp.OV0jJxAFrp 00:30:18.436 12:16:05 keyring_file -- keyring/file.sh@95 -- # key0path=/tmp/tmp.OV0jJxAFrp 00:30:18.436 12:16:05 keyring_file -- keyring/file.sh@96 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OV0jJxAFrp 00:30:18.436 12:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OV0jJxAFrp 00:30:18.695 12:16:05 keyring_file -- keyring/file.sh@97 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.695 12:16:05 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:18.955 nvme0n1 00:30:18.955 12:16:06 keyring_file -- keyring/file.sh@99 
-- # get_refcnt key0 00:30:18.955 12:16:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:18.955 12:16:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:18.955 12:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:18.955 12:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:18.955 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.214 12:16:06 keyring_file -- keyring/file.sh@99 -- # (( 2 == 2 )) 00:30:19.215 12:16:06 keyring_file -- keyring/file.sh@100 -- # bperf_cmd keyring_file_remove_key key0 00:30:19.215 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_remove_key key0 00:30:19.215 12:16:06 keyring_file -- keyring/file.sh@101 -- # get_key key0 00:30:19.215 12:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.215 12:16:06 keyring_file -- keyring/file.sh@101 -- # jq -r .removed 00:30:19.215 12:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.215 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.474 12:16:06 keyring_file -- keyring/file.sh@101 -- # [[ true == \t\r\u\e ]] 00:30:19.474 12:16:06 keyring_file -- keyring/file.sh@102 -- # get_refcnt key0 00:30:19.474 12:16:06 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:19.474 12:16:06 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:19.474 12:16:06 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:19.474 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.474 12:16:06 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:19.734 12:16:06 keyring_file -- keyring/file.sh@102 -- # (( 1 == 1 )) 00:30:19.734 12:16:06 keyring_file -- keyring/file.sh@103 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:19.734 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:19.734 12:16:06 keyring_file -- keyring/file.sh@104 -- # bperf_cmd keyring_get_keys 00:30:19.734 12:16:06 keyring_file -- keyring/file.sh@104 -- # jq length 00:30:19.734 12:16:06 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:19.994 12:16:07 keyring_file -- keyring/file.sh@104 -- # (( 0 == 0 )) 00:30:19.994 12:16:07 keyring_file -- keyring/file.sh@107 -- # bperf_cmd keyring_file_add_key key0 /tmp/tmp.OV0jJxAFrp 00:30:19.994 12:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key0 /tmp/tmp.OV0jJxAFrp 00:30:20.255 12:16:07 keyring_file -- keyring/file.sh@108 -- # bperf_cmd keyring_file_add_key key1 /tmp/tmp.vUNFoned13 00:30:20.255 12:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_file_add_key key1 /tmp/tmp.vUNFoned13 00:30:20.255 12:16:07 keyring_file -- keyring/file.sh@109 -- # 
bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.255 12:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk key0 00:30:20.523 nvme0n1 00:30:20.523 12:16:07 keyring_file -- keyring/file.sh@112 -- # bperf_cmd save_config 00:30:20.523 12:16:07 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock save_config 00:30:20.823 12:16:07 keyring_file -- keyring/file.sh@112 -- # config='{ 00:30:20.823 "subsystems": [ 00:30:20.823 { 00:30:20.823 "subsystem": "keyring", 00:30:20.823 "config": [ 00:30:20.823 { 00:30:20.823 "method": "keyring_file_add_key", 00:30:20.823 "params": { 00:30:20.823 "name": "key0", 00:30:20.823 "path": "/tmp/tmp.OV0jJxAFrp" 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "keyring_file_add_key", 00:30:20.823 "params": { 00:30:20.823 "name": "key1", 00:30:20.823 "path": "/tmp/tmp.vUNFoned13" 00:30:20.823 } 00:30:20.823 } 00:30:20.823 ] 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "subsystem": "iobuf", 00:30:20.823 "config": [ 00:30:20.823 { 00:30:20.823 "method": "iobuf_set_options", 00:30:20.823 "params": { 00:30:20.823 "small_pool_count": 8192, 00:30:20.823 "large_pool_count": 1024, 00:30:20.823 "small_bufsize": 8192, 00:30:20.823 "large_bufsize": 135168 00:30:20.823 } 00:30:20.823 } 00:30:20.823 ] 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "subsystem": "sock", 00:30:20.823 "config": [ 00:30:20.823 { 00:30:20.823 "method": "sock_set_default_impl", 00:30:20.823 "params": { 00:30:20.823 "impl_name": "posix" 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "sock_impl_set_options", 00:30:20.823 "params": { 00:30:20.823 "impl_name": "ssl", 00:30:20.823 "recv_buf_size": 4096, 00:30:20.823 "send_buf_size": 4096, 00:30:20.823 "enable_recv_pipe": true, 00:30:20.823 "enable_quickack": false, 00:30:20.823 "enable_placement_id": 0, 00:30:20.823 "enable_zerocopy_send_server": true, 00:30:20.823 "enable_zerocopy_send_client": false, 00:30:20.823 "zerocopy_threshold": 0, 00:30:20.823 "tls_version": 0, 00:30:20.823 "enable_ktls": false 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "sock_impl_set_options", 00:30:20.823 "params": { 00:30:20.823 "impl_name": "posix", 00:30:20.823 "recv_buf_size": 2097152, 00:30:20.823 "send_buf_size": 2097152, 00:30:20.823 "enable_recv_pipe": true, 00:30:20.823 "enable_quickack": false, 00:30:20.823 "enable_placement_id": 0, 00:30:20.823 "enable_zerocopy_send_server": true, 00:30:20.823 "enable_zerocopy_send_client": false, 00:30:20.823 "zerocopy_threshold": 0, 00:30:20.823 "tls_version": 0, 00:30:20.823 "enable_ktls": false 00:30:20.823 } 00:30:20.823 } 00:30:20.823 ] 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "subsystem": "vmd", 00:30:20.823 "config": [] 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "subsystem": "accel", 00:30:20.823 "config": [ 00:30:20.823 { 00:30:20.823 "method": "accel_set_options", 00:30:20.823 "params": { 00:30:20.823 "small_cache_size": 128, 00:30:20.823 "large_cache_size": 16, 00:30:20.823 "task_count": 2048, 00:30:20.823 "sequence_count": 2048, 00:30:20.823 "buf_count": 2048 00:30:20.823 } 00:30:20.823 } 00:30:20.823 ] 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 
"subsystem": "bdev", 00:30:20.823 "config": [ 00:30:20.823 { 00:30:20.823 "method": "bdev_set_options", 00:30:20.823 "params": { 00:30:20.823 "bdev_io_pool_size": 65535, 00:30:20.823 "bdev_io_cache_size": 256, 00:30:20.823 "bdev_auto_examine": true, 00:30:20.823 "iobuf_small_cache_size": 128, 00:30:20.823 "iobuf_large_cache_size": 16 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "bdev_raid_set_options", 00:30:20.823 "params": { 00:30:20.823 "process_window_size_kb": 1024, 00:30:20.823 "process_max_bandwidth_mb_sec": 0 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "bdev_iscsi_set_options", 00:30:20.823 "params": { 00:30:20.823 "timeout_sec": 30 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "bdev_nvme_set_options", 00:30:20.823 "params": { 00:30:20.823 "action_on_timeout": "none", 00:30:20.823 "timeout_us": 0, 00:30:20.823 "timeout_admin_us": 0, 00:30:20.823 "keep_alive_timeout_ms": 10000, 00:30:20.823 "arbitration_burst": 0, 00:30:20.823 "low_priority_weight": 0, 00:30:20.823 "medium_priority_weight": 0, 00:30:20.823 "high_priority_weight": 0, 00:30:20.823 "nvme_adminq_poll_period_us": 10000, 00:30:20.823 "nvme_ioq_poll_period_us": 0, 00:30:20.823 "io_queue_requests": 512, 00:30:20.823 "delay_cmd_submit": true, 00:30:20.823 "transport_retry_count": 4, 00:30:20.823 "bdev_retry_count": 3, 00:30:20.823 "transport_ack_timeout": 0, 00:30:20.823 "ctrlr_loss_timeout_sec": 0, 00:30:20.823 "reconnect_delay_sec": 0, 00:30:20.823 "fast_io_fail_timeout_sec": 0, 00:30:20.823 "disable_auto_failback": false, 00:30:20.823 "generate_uuids": false, 00:30:20.823 "transport_tos": 0, 00:30:20.823 "nvme_error_stat": false, 00:30:20.823 "rdma_srq_size": 0, 00:30:20.823 "io_path_stat": false, 00:30:20.823 "allow_accel_sequence": false, 00:30:20.823 "rdma_max_cq_size": 0, 00:30:20.823 "rdma_cm_event_timeout_ms": 0, 00:30:20.823 "dhchap_digests": [ 00:30:20.823 "sha256", 00:30:20.823 "sha384", 00:30:20.823 "sha512" 00:30:20.823 ], 00:30:20.823 "dhchap_dhgroups": [ 00:30:20.823 "null", 00:30:20.823 "ffdhe2048", 00:30:20.823 "ffdhe3072", 00:30:20.823 "ffdhe4096", 00:30:20.823 "ffdhe6144", 00:30:20.823 "ffdhe8192" 00:30:20.823 ] 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "bdev_nvme_attach_controller", 00:30:20.823 "params": { 00:30:20.823 "name": "nvme0", 00:30:20.823 "trtype": "TCP", 00:30:20.823 "adrfam": "IPv4", 00:30:20.823 "traddr": "127.0.0.1", 00:30:20.823 "trsvcid": "4420", 00:30:20.823 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:20.823 "prchk_reftag": false, 00:30:20.823 "prchk_guard": false, 00:30:20.823 "ctrlr_loss_timeout_sec": 0, 00:30:20.823 "reconnect_delay_sec": 0, 00:30:20.823 "fast_io_fail_timeout_sec": 0, 00:30:20.823 "psk": "key0", 00:30:20.823 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:20.823 "hdgst": false, 00:30:20.823 "ddgst": false 00:30:20.823 } 00:30:20.823 }, 00:30:20.823 { 00:30:20.823 "method": "bdev_nvme_set_hotplug", 00:30:20.823 "params": { 00:30:20.823 "period_us": 100000, 00:30:20.823 "enable": false 00:30:20.824 } 00:30:20.824 }, 00:30:20.824 { 00:30:20.824 "method": "bdev_wait_for_examine" 00:30:20.824 } 00:30:20.824 ] 00:30:20.824 }, 00:30:20.824 { 00:30:20.824 "subsystem": "nbd", 00:30:20.824 "config": [] 00:30:20.824 } 00:30:20.824 ] 00:30:20.824 }' 00:30:20.824 12:16:07 keyring_file -- keyring/file.sh@114 -- # killprocess 525094 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 525094 ']' 00:30:20.824 12:16:07 keyring_file -- 
common/autotest_common.sh@952 -- # kill -0 525094 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 525094 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 525094' 00:30:20.824 killing process with pid 525094 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@967 -- # kill 525094 00:30:20.824 Received shutdown signal, test time was about 1.000000 seconds 00:30:20.824 00:30:20.824 Latency(us) 00:30:20.824 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:20.824 =================================================================================================================== 00:30:20.824 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:20.824 12:16:07 keyring_file -- common/autotest_common.sh@972 -- # wait 525094 00:30:21.084 12:16:08 keyring_file -- keyring/file.sh@117 -- # bperfpid=526633 00:30:21.084 12:16:08 keyring_file -- keyring/file.sh@119 -- # waitforlisten 526633 /var/tmp/bperf.sock 00:30:21.084 12:16:08 keyring_file -- common/autotest_common.sh@829 -- # '[' -z 526633 ']' 00:30:21.084 12:16:08 keyring_file -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:21.084 12:16:08 keyring_file -- keyring/file.sh@115 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c /dev/fd/63 00:30:21.084 12:16:08 keyring_file -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:21.084 12:16:08 keyring_file -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:21.084 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 
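The second bdevperf instance above is fed the configuration captured earlier with save_config; a sketch of that relaunch pattern, with the binary path and flags from the log and "$config" standing in for the JSON echoed below:

  bdevperf=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf
  # keyring/file.sh@115 pipes the saved JSON in through process substitution,
  # which is why the command line records -c /dev/fd/63.
  $bdevperf -q 128 -o 4k -w randrw -M 50 -t 1 -m 2 -r /var/tmp/bperf.sock -z -c <(echo "$config")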
00:30:21.084 12:16:08 keyring_file -- keyring/file.sh@115 -- # echo '{ 00:30:21.084 "subsystems": [ 00:30:21.084 { 00:30:21.084 "subsystem": "keyring", 00:30:21.084 "config": [ 00:30:21.084 { 00:30:21.084 "method": "keyring_file_add_key", 00:30:21.084 "params": { 00:30:21.084 "name": "key0", 00:30:21.084 "path": "/tmp/tmp.OV0jJxAFrp" 00:30:21.084 } 00:30:21.084 }, 00:30:21.084 { 00:30:21.084 "method": "keyring_file_add_key", 00:30:21.084 "params": { 00:30:21.084 "name": "key1", 00:30:21.084 "path": "/tmp/tmp.vUNFoned13" 00:30:21.084 } 00:30:21.084 } 00:30:21.084 ] 00:30:21.084 }, 00:30:21.084 { 00:30:21.084 "subsystem": "iobuf", 00:30:21.084 "config": [ 00:30:21.084 { 00:30:21.084 "method": "iobuf_set_options", 00:30:21.084 "params": { 00:30:21.084 "small_pool_count": 8192, 00:30:21.084 "large_pool_count": 1024, 00:30:21.084 "small_bufsize": 8192, 00:30:21.084 "large_bufsize": 135168 00:30:21.084 } 00:30:21.084 } 00:30:21.084 ] 00:30:21.084 }, 00:30:21.084 { 00:30:21.084 "subsystem": "sock", 00:30:21.084 "config": [ 00:30:21.084 { 00:30:21.084 "method": "sock_set_default_impl", 00:30:21.084 "params": { 00:30:21.084 "impl_name": "posix" 00:30:21.084 } 00:30:21.084 }, 00:30:21.084 { 00:30:21.084 "method": "sock_impl_set_options", 00:30:21.085 "params": { 00:30:21.085 "impl_name": "ssl", 00:30:21.085 "recv_buf_size": 4096, 00:30:21.085 "send_buf_size": 4096, 00:30:21.085 "enable_recv_pipe": true, 00:30:21.085 "enable_quickack": false, 00:30:21.085 "enable_placement_id": 0, 00:30:21.085 "enable_zerocopy_send_server": true, 00:30:21.085 "enable_zerocopy_send_client": false, 00:30:21.085 "zerocopy_threshold": 0, 00:30:21.085 "tls_version": 0, 00:30:21.085 "enable_ktls": false 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "sock_impl_set_options", 00:30:21.085 "params": { 00:30:21.085 "impl_name": "posix", 00:30:21.085 "recv_buf_size": 2097152, 00:30:21.085 "send_buf_size": 2097152, 00:30:21.085 "enable_recv_pipe": true, 00:30:21.085 "enable_quickack": false, 00:30:21.085 "enable_placement_id": 0, 00:30:21.085 "enable_zerocopy_send_server": true, 00:30:21.085 "enable_zerocopy_send_client": false, 00:30:21.085 "zerocopy_threshold": 0, 00:30:21.085 "tls_version": 0, 00:30:21.085 "enable_ktls": false 00:30:21.085 } 00:30:21.085 } 00:30:21.085 ] 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "subsystem": "vmd", 00:30:21.085 "config": [] 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "subsystem": "accel", 00:30:21.085 "config": [ 00:30:21.085 { 00:30:21.085 "method": "accel_set_options", 00:30:21.085 "params": { 00:30:21.085 "small_cache_size": 128, 00:30:21.085 "large_cache_size": 16, 00:30:21.085 "task_count": 2048, 00:30:21.085 "sequence_count": 2048, 00:30:21.085 "buf_count": 2048 00:30:21.085 } 00:30:21.085 } 00:30:21.085 ] 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "subsystem": "bdev", 00:30:21.085 "config": [ 00:30:21.085 { 00:30:21.085 "method": "bdev_set_options", 00:30:21.085 "params": { 00:30:21.085 "bdev_io_pool_size": 65535, 00:30:21.085 "bdev_io_cache_size": 256, 00:30:21.085 "bdev_auto_examine": true, 00:30:21.085 "iobuf_small_cache_size": 128, 00:30:21.085 "iobuf_large_cache_size": 16 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "bdev_raid_set_options", 00:30:21.085 "params": { 00:30:21.085 "process_window_size_kb": 1024, 00:30:21.085 "process_max_bandwidth_mb_sec": 0 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "bdev_iscsi_set_options", 00:30:21.085 "params": { 00:30:21.085 "timeout_sec": 30 00:30:21.085 } 00:30:21.085 
}, 00:30:21.085 { 00:30:21.085 "method": "bdev_nvme_set_options", 00:30:21.085 "params": { 00:30:21.085 "action_on_timeout": "none", 00:30:21.085 "timeout_us": 0, 00:30:21.085 "timeout_admin_us": 0, 00:30:21.085 "keep_alive_timeout_ms": 10000, 00:30:21.085 "arbitration_burst": 0, 00:30:21.085 "low_priority_weight": 0, 00:30:21.085 "medium_priority_weight": 0, 00:30:21.085 "high_priority_weight": 0, 00:30:21.085 "nvme_adminq_poll_period_us": 10000, 00:30:21.085 "nvme_ioq_poll_period_us": 0, 00:30:21.085 "io_queue_requests": 512, 00:30:21.085 "delay_cmd_submit": true, 00:30:21.085 "transport_retry_count": 4, 00:30:21.085 "bdev_retry_count": 3, 00:30:21.085 "transport_ack_timeout": 0, 00:30:21.085 "ctrlr_loss_timeout_sec": 0, 00:30:21.085 "reconnect_delay_sec": 0, 00:30:21.085 "fast_io_fail_timeout_sec": 0, 00:30:21.085 "disable_auto_failback": false, 00:30:21.085 "generate_uuids": false, 00:30:21.085 "transport_tos": 0, 00:30:21.085 "nvme_error_stat": false, 00:30:21.085 "rdma_srq_size": 0, 00:30:21.085 "io_path_stat": false, 00:30:21.085 "allow_accel_sequence": false, 00:30:21.085 "rdma_max_cq_size": 0, 00:30:21.085 "rdma_cm_event_timeout_ms": 0, 00:30:21.085 "dhchap_digests": [ 00:30:21.085 "sha256", 00:30:21.085 "sha384", 00:30:21.085 "sha512" 00:30:21.085 ], 00:30:21.085 "dhchap_dhgroups": [ 00:30:21.085 "null", 00:30:21.085 "ffdhe2048", 00:30:21.085 "ffdhe3072", 00:30:21.085 "ffdhe4096", 00:30:21.085 "ffdhe6144", 00:30:21.085 "ffdhe8192" 00:30:21.085 ] 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "bdev_nvme_attach_controller", 00:30:21.085 "params": { 00:30:21.085 "name": "nvme0", 00:30:21.085 "trtype": "TCP", 00:30:21.085 "adrfam": "IPv4", 00:30:21.085 "traddr": "127.0.0.1", 00:30:21.085 "trsvcid": "4420", 00:30:21.085 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:21.085 "prchk_reftag": false, 00:30:21.085 "prchk_guard": false, 00:30:21.085 "ctrlr_loss_timeout_sec": 0, 00:30:21.085 "reconnect_delay_sec": 0, 00:30:21.085 "fast_io_fail_timeout_sec": 0, 00:30:21.085 "psk": "key0", 00:30:21.085 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:21.085 "hdgst": false, 00:30:21.085 "ddgst": false 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "bdev_nvme_set_hotplug", 00:30:21.085 "params": { 00:30:21.085 "period_us": 100000, 00:30:21.085 "enable": false 00:30:21.085 } 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "method": "bdev_wait_for_examine" 00:30:21.085 } 00:30:21.085 ] 00:30:21.085 }, 00:30:21.085 { 00:30:21.085 "subsystem": "nbd", 00:30:21.085 "config": [] 00:30:21.085 } 00:30:21.085 ] 00:30:21.085 }' 00:30:21.085 12:16:08 keyring_file -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:21.085 12:16:08 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:21.085 [2024-07-25 12:16:08.200249] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
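The key checks that follow rely on the get_key/get_refcnt helpers from keyring/common.sh; a minimal sketch of that jq pipeline, using the RPC socket from this run (the key name is just an example):

  rpc=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py
  # keyring_get_keys returns an array of key objects; select one by name and read its
  # reference count, which rises while an attached controller is holding the PSK.
  $rpc -s /var/tmp/bperf.sock keyring_get_keys | jq '.[] | select(.name == "key0")' | jq -r .refcnt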
00:30:21.085 [2024-07-25 12:16:08.200300] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid526633 ] 00:30:21.085 EAL: No free 2048 kB hugepages reported on node 1 00:30:21.085 [2024-07-25 12:16:08.252303] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:21.085 [2024-07-25 12:16:08.324460] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:21.345 [2024-07-25 12:16:08.483858] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:21.914 12:16:09 keyring_file -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:21.914 12:16:09 keyring_file -- common/autotest_common.sh@862 -- # return 0 00:30:21.914 12:16:09 keyring_file -- keyring/file.sh@120 -- # bperf_cmd keyring_get_keys 00:30:21.914 12:16:09 keyring_file -- keyring/file.sh@120 -- # jq length 00:30:21.914 12:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.174 12:16:09 keyring_file -- keyring/file.sh@120 -- # (( 2 == 2 )) 00:30:22.174 12:16:09 keyring_file -- keyring/file.sh@121 -- # get_refcnt key0 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key0 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key0")' 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.174 12:16:09 keyring_file -- keyring/file.sh@121 -- # (( 2 == 2 )) 00:30:22.174 12:16:09 keyring_file -- keyring/file.sh@122 -- # get_refcnt key1 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@12 -- # get_key key1 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@12 -- # jq -r .refcnt 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@10 -- # jq '.[] | select(.name == "key1")' 00:30:22.174 12:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:22.434 12:16:09 keyring_file -- keyring/file.sh@122 -- # (( 1 == 1 )) 00:30:22.434 12:16:09 keyring_file -- keyring/file.sh@123 -- # bperf_cmd bdev_nvme_get_controllers 00:30:22.434 12:16:09 keyring_file -- keyring/file.sh@123 -- # jq -r '.[].name' 00:30:22.434 12:16:09 keyring_file -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_get_controllers 00:30:22.693 12:16:09 keyring_file -- keyring/file.sh@123 -- # [[ nvme0 == nvme0 ]] 00:30:22.693 12:16:09 keyring_file -- keyring/file.sh@1 -- # cleanup 00:30:22.693 12:16:09 keyring_file -- keyring/file.sh@19 -- # rm -f /tmp/tmp.OV0jJxAFrp /tmp/tmp.vUNFoned13 00:30:22.693 12:16:09 keyring_file -- keyring/file.sh@20 -- # killprocess 526633 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 526633 ']' 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@952 -- # kill -0 526633 00:30:22.693 12:16:09 keyring_file -- 
common/autotest_common.sh@953 -- # uname 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 526633 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 526633' 00:30:22.693 killing process with pid 526633 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@967 -- # kill 526633 00:30:22.693 Received shutdown signal, test time was about 1.000000 seconds 00:30:22.693 00:30:22.693 Latency(us) 00:30:22.693 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:22.693 =================================================================================================================== 00:30:22.693 Total : 0.00 0.00 0.00 0.00 0.00 18446744073709551616.00 0.00 00:30:22.693 12:16:09 keyring_file -- common/autotest_common.sh@972 -- # wait 526633 00:30:22.952 12:16:09 keyring_file -- keyring/file.sh@21 -- # killprocess 525080 00:30:22.952 12:16:09 keyring_file -- common/autotest_common.sh@948 -- # '[' -z 525080 ']' 00:30:22.952 12:16:09 keyring_file -- common/autotest_common.sh@952 -- # kill -0 525080 00:30:22.952 12:16:09 keyring_file -- common/autotest_common.sh@953 -- # uname 00:30:22.952 12:16:09 keyring_file -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:22.952 12:16:09 keyring_file -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 525080 00:30:22.952 12:16:10 keyring_file -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:22.952 12:16:10 keyring_file -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:22.952 12:16:10 keyring_file -- common/autotest_common.sh@966 -- # echo 'killing process with pid 525080' 00:30:22.952 killing process with pid 525080 00:30:22.952 12:16:10 keyring_file -- common/autotest_common.sh@967 -- # kill 525080 00:30:22.952 [2024-07-25 12:16:10.029275] app.c:1024:log_deprecation_hits: *WARNING*: nvmf_tcp_psk_path: deprecation 'PSK path' scheduled for removal in v24.09 hit 1 times 00:30:22.952 12:16:10 keyring_file -- common/autotest_common.sh@972 -- # wait 525080 00:30:23.213 00:30:23.213 real 0m12.005s 00:30:23.213 user 0m28.061s 00:30:23.213 sys 0m2.675s 00:30:23.213 12:16:10 keyring_file -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:23.213 12:16:10 keyring_file -- common/autotest_common.sh@10 -- # set +x 00:30:23.213 ************************************ 00:30:23.213 END TEST keyring_file 00:30:23.213 ************************************ 00:30:23.213 12:16:10 -- common/autotest_common.sh@1142 -- # return 0 00:30:23.213 12:16:10 -- spdk/autotest.sh@296 -- # [[ y == y ]] 00:30:23.213 12:16:10 -- spdk/autotest.sh@297 -- # run_test keyring_linux /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:23.213 12:16:10 -- common/autotest_common.sh@1099 -- # '[' 2 -le 1 ']' 00:30:23.213 12:16:10 -- common/autotest_common.sh@1105 -- # xtrace_disable 00:30:23.213 12:16:10 -- common/autotest_common.sh@10 -- # set +x 00:30:23.213 ************************************ 00:30:23.213 START TEST keyring_linux 00:30:23.213 ************************************ 00:30:23.213 12:16:10 keyring_linux -- common/autotest_common.sh@1123 -- # 
/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/linux.sh 00:30:23.473 * Looking for test storage... 00:30:23.473 * Found test storage at /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring 00:30:23.473 12:16:10 keyring_linux -- keyring/linux.sh@9 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/keyring/common.sh 00:30:23.473 12:16:10 keyring_linux -- keyring/common.sh@4 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/test/nvmf/common.sh 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@7 -- # uname -s 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.473 12:16:10 keyring_linux -- nvmf/common.sh@18 -- # NVME_HOSTID=80aaeb9f-0274-ea11-906e-0017a4403562 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@21 -- # NET_TYPE=phy 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@45 -- # source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:30:23.474 12:16:10 keyring_linux -- scripts/common.sh@508 -- # [[ -e /bin/wpdk_common.sh ]] 00:30:23.474 12:16:10 keyring_linux -- scripts/common.sh@516 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:30:23.474 12:16:10 keyring_linux -- scripts/common.sh@517 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:30:23.474 12:16:10 keyring_linux -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.474 12:16:10 keyring_linux -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.474 12:16:10 keyring_linux -- paths/export.sh@4 -- # 
PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.474 12:16:10 keyring_linux -- paths/export.sh@5 -- # export PATH 00:30:23.474 12:16:10 keyring_linux -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@47 -- # : 0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@48 -- # export NVMF_APP_SHM_ID 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@49 -- # build_nvmf_app_args 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@33 -- # '[' -n '' ']' 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@35 -- # '[' 0 -eq 1 ']' 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@51 -- # have_pci_nics=0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@6 -- # bperfsock=/var/tmp/bperf.sock 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@11 -- # subnqn=nqn.2016-06.io.spdk:cnode0 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@12 -- # hostnqn=nqn.2016-06.io.spdk:host0 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@13 -- # key0=00112233445566778899aabbccddeeff 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@14 -- # key1=112233445566778899aabbccddeeff00 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@45 -- # trap cleanup EXIT 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@47 -- # prep_key key0 00112233445566778899aabbccddeeff 0 /tmp/:spdk-test:key0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # name=key0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # key=00112233445566778899aabbccddeeff 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 00112233445566778899aabbccddeeff 0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 00112233445566778899aabbccddeeff 0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # key=00112233445566778899aabbccddeeff 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:23.474 12:16:10 keyring_linux -- 
keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key0 00:30:23.474 /tmp/:spdk-test:key0 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@48 -- # prep_key key1 112233445566778899aabbccddeeff00 0 /tmp/:spdk-test:key1 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@15 -- # local name key digest path 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # name=key1 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # key=112233445566778899aabbccddeeff00 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@17 -- # digest=0 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@18 -- # path=/tmp/:spdk-test:key1 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@20 -- # format_interchange_psk 112233445566778899aabbccddeeff00 0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@715 -- # format_key NVMeTLSkey-1 112233445566778899aabbccddeeff00 0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@702 -- # local prefix key digest 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # prefix=NVMeTLSkey-1 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # key=112233445566778899aabbccddeeff00 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@704 -- # digest=0 00:30:23.474 12:16:10 keyring_linux -- nvmf/common.sh@705 -- # python - 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@21 -- # chmod 0600 /tmp/:spdk-test:key1 00:30:23.474 12:16:10 keyring_linux -- keyring/common.sh@23 -- # echo /tmp/:spdk-test:key1 00:30:23.474 /tmp/:spdk-test:key1 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@50 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/bin/spdk_tgt 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@51 -- # tgtpid=527174 00:30:23.474 12:16:10 keyring_linux -- keyring/linux.sh@53 -- # waitforlisten 527174 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 527174 ']' 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:23.474 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:23.474 12:16:10 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:23.474 [2024-07-25 12:16:10.652199] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
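A sketch of the prep_key helper used just above to create /tmp/:spdk-test:key0 and /tmp/:spdk-test:key1. The NVMeTLSkey-1 prefix, the digest value 0, and the chmod come from the log; that format_interchange_psk base64-encodes the cleartext key with a checksum appended, and that its output is redirected into the path, are assumptions about the helper's internals:

  # prep_key name key digest path, roughly as exercised above
  key=00112233445566778899aabbccddeeff
  path=/tmp/:spdk-test:key0
  format_interchange_psk "$key" 0 > "$path"   # emits NVMeTLSkey-1:00:<base64(key+checksum)>:  (assumed)
  chmod 0600 "$path"                          # keyring_file refuses anything looser than 0600, as seen earlier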
00:30:23.474 [2024-07-25 12:16:10.652262] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527174 ] 00:30:23.474 EAL: No free 2048 kB hugepages reported on node 1 00:30:23.474 [2024-07-25 12:16:10.705271] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:23.734 [2024-07-25 12:16:10.787471] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@54 -- # rpc_cmd 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@559 -- # xtrace_disable 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.303 [2024-07-25 12:16:11.471808] tcp.c: 677:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:24.303 null0 00:30:24.303 [2024-07-25 12:16:11.503865] tcp.c: 956:nvmf_tcp_listen: *NOTICE*: TLS support is considered experimental 00:30:24.303 [2024-07-25 12:16:11.504199] tcp.c:1006:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@587 -- # [[ 0 == 0 ]] 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@66 -- # keyctl add user :spdk-test:key0 NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: @s 00:30:24.303 119154111 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@67 -- # keyctl add user :spdk-test:key1 NVMeTLSkey-1:00:MTEyMjMzNDQ1NTY2Nzc4ODk5YWFiYmNjZGRlZWZmMDA6CPcs: @s 00:30:24.303 719174657 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@70 -- # bperfpid=527407 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@72 -- # waitforlisten 527407 /var/tmp/bperf.sock 00:30:24.303 12:16:11 keyring_linux -- keyring/linux.sh@68 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/build/examples/bdevperf -q 128 -o 4k -w randread -t 1 -m 2 -r /var/tmp/bperf.sock -z --wait-for-rpc 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@829 -- # '[' -z 527407 ']' 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@833 -- # local rpc_addr=/var/tmp/bperf.sock 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@834 -- # local max_retries=100 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@836 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock...' 00:30:24.303 Waiting for process to start up and listen on UNIX domain socket /var/tmp/bperf.sock... 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@838 -- # xtrace_disable 00:30:24.303 12:16:11 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:24.563 [2024-07-25 12:16:11.572425] Starting SPDK v24.09-pre git sha1 58883cba9 / DPDK 24.03.0 initialization... 
00:30:24.563 [2024-07-25 12:16:11.572470] [ DPDK EAL parameters: bdevperf --no-shconf -c 2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid527407 ] 00:30:24.563 EAL: No free 2048 kB hugepages reported on node 1 00:30:24.563 [2024-07-25 12:16:11.625507] app.c: 909:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.563 [2024-07-25 12:16:11.705269] reactor.c: 941:reactor_run: *NOTICE*: Reactor started on core 1 00:30:25.501 12:16:12 keyring_linux -- common/autotest_common.sh@858 -- # (( i == 0 )) 00:30:25.501 12:16:12 keyring_linux -- common/autotest_common.sh@862 -- # return 0 00:30:25.501 12:16:12 keyring_linux -- keyring/linux.sh@73 -- # bperf_cmd keyring_linux_set_options --enable 00:30:25.501 12:16:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_linux_set_options --enable 00:30:25.501 12:16:12 keyring_linux -- keyring/linux.sh@74 -- # bperf_cmd framework_start_init 00:30:25.501 12:16:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock framework_start_init 00:30:25.761 12:16:12 keyring_linux -- keyring/linux.sh@75 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:25.762 12:16:12 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key0 00:30:25.762 [2024-07-25 12:16:12.958008] bdev_nvme_rpc.c: 517:rpc_bdev_nvme_attach_controller: *NOTICE*: TLS support is considered experimental 00:30:26.022 nvme0n1 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@77 -- # check_keys 1 :spdk-test:key0 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@19 -- # local count=1 name=:spdk-test:key0 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:26.022 12:16:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@22 -- # (( 1 == count )) 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@25 -- # get_key :spdk-test:key0 00:30:26.022 12:16:13 keyring_linux -- keyring/linux.sh@25 -- # jq -r .sn 00:30:26.022 12:16:13 keyring_linux -- keyring/common.sh@10 -- # bperf_cmd keyring_get_keys 00:30:26.022 12:16:13 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:26.022 12:16:13 keyring_linux -- keyring/common.sh@10 -- # jq '.[] | select(.name == ":spdk-test:key0")' 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@25 -- # sn=119154111 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@26 -- # get_keysn :spdk-test:key0 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user 
:spdk-test:key0 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@26 -- # [[ 119154111 == \1\1\9\1\5\4\1\1\1 ]] 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@27 -- # keyctl print 119154111 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@27 -- # [[ NVMeTLSkey-1:00:MDAxMTIyMzM0NDU1NjY3Nzg4OTlhYWJiY2NkZGVlZmZwJEiQ: == \N\V\M\e\T\L\S\k\e\y\-\1\:\0\0\:\M\D\A\x\M\T\I\y\M\z\M\0\N\D\U\1\N\j\Y\3\N\z\g\4\O\T\l\h\Y\W\J\i\Y\2\N\k\Z\G\V\l\Z\m\Z\w\J\E\i\Q\: ]] 00:30:26.281 12:16:13 keyring_linux -- keyring/linux.sh@79 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/examples/bdev/bdevperf/bdevperf.py -s /var/tmp/bperf.sock perform_tests 00:30:26.281 Running I/O for 1 seconds... 00:30:27.660 00:30:27.660 Latency(us) 00:30:27.660 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.660 Job: nvme0n1 (Core Mask 0x2, workload: randread, depth: 128, IO size: 4096) 00:30:27.660 nvme0n1 : 1.04 2648.26 10.34 0.00 0.00 47566.10 16412.49 65649.98 00:30:27.660 =================================================================================================================== 00:30:27.660 Total : 2648.26 10.34 0.00 0.00 47566.10 16412.49 65649.98 00:30:27.660 0 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@80 -- # bperf_cmd bdev_nvme_detach_controller nvme0 00:30:27.660 12:16:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_detach_controller nvme0 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@81 -- # check_keys 0 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@19 -- # local count=0 name= 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@20 -- # local sn 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@22 -- # bperf_cmd keyring_get_keys 00:30:27.660 12:16:14 keyring_linux -- keyring/linux.sh@22 -- # jq length 00:30:27.660 12:16:14 keyring_linux -- keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock keyring_get_keys 00:30:27.920 12:16:14 keyring_linux -- keyring/linux.sh@22 -- # (( 0 == count )) 00:30:27.920 12:16:14 keyring_linux -- keyring/linux.sh@23 -- # (( count == 0 )) 00:30:27.920 12:16:14 keyring_linux -- keyring/linux.sh@23 -- # return 00:30:27.920 12:16:14 keyring_linux -- keyring/linux.sh@84 -- # NOT bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@648 -- # local es=0 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@650 -- # valid_exec_arg bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@636 -- # local arg=bperf_cmd 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@640 -- # type -t bperf_cmd 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@640 -- # case "$(type -t "$arg")" in 00:30:27.920 12:16:14 keyring_linux -- common/autotest_common.sh@651 -- # bperf_cmd bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.920 12:16:14 keyring_linux -- 
keyring/common.sh@8 -- # /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/rpc.py -s /var/tmp/bperf.sock bdev_nvme_attach_controller -b nvme0 -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2016-06.io.spdk:cnode0 -q nqn.2016-06.io.spdk:host0 --psk :spdk-test:key1 00:30:27.920 [2024-07-25 12:16:15.083678] /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/include/spdk_internal/nvme_tcp.h: 428:nvme_tcp_read_data: *ERROR*: spdk_sock_recv() failed, errno 107: Transport endpoint is not connected 00:30:27.920 [2024-07-25 12:16:15.084304] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaca770 (107): Transport endpoint is not connected 00:30:27.920 [2024-07-25 12:16:15.085300] nvme_tcp.c:2185:nvme_tcp_qpair_process_completions: *ERROR*: Failed to flush tqpair=0xaca770 (9): Bad file descriptor 00:30:27.920 [2024-07-25 12:16:15.086298] nvme_ctrlr.c:4164:nvme_ctrlr_process_init: *ERROR*: [nqn.2016-06.io.spdk:cnode0] Ctrlr is in error state 00:30:27.920 [2024-07-25 12:16:15.086309] nvme.c: 708:nvme_ctrlr_poll_internal: *ERROR*: Failed to initialize SSD: 127.0.0.1 00:30:27.920 [2024-07-25 12:16:15.086316] nvme_ctrlr.c:1106:nvme_ctrlr_fail: *ERROR*: [nqn.2016-06.io.spdk:cnode0] in failed state. 00:30:27.920 request: 00:30:27.920 { 00:30:27.920 "name": "nvme0", 00:30:27.920 "trtype": "tcp", 00:30:27.920 "traddr": "127.0.0.1", 00:30:27.920 "adrfam": "ipv4", 00:30:27.920 "trsvcid": "4420", 00:30:27.920 "subnqn": "nqn.2016-06.io.spdk:cnode0", 00:30:27.920 "hostnqn": "nqn.2016-06.io.spdk:host0", 00:30:27.920 "prchk_reftag": false, 00:30:27.920 "prchk_guard": false, 00:30:27.920 "hdgst": false, 00:30:27.920 "ddgst": false, 00:30:27.920 "psk": ":spdk-test:key1", 00:30:27.920 "method": "bdev_nvme_attach_controller", 00:30:27.920 "req_id": 1 00:30:27.920 } 00:30:27.920 Got JSON-RPC error response 00:30:27.920 response: 00:30:27.920 { 00:30:27.920 "code": -5, 00:30:27.920 "message": "Input/output error" 00:30:27.920 } 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@651 -- # es=1 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@659 -- # (( es > 128 )) 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@670 -- # [[ -n '' ]] 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@675 -- # (( !es == 0 )) 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@1 -- # cleanup 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key0 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key0 sn 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key0 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key0 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@33 -- # sn=119154111 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 119154111 00:30:27.920 1 links removed 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@38 -- # for key in key0 key1 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@39 -- # unlink_key key1 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@31 -- # local name=key1 sn 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@33 -- # get_keysn :spdk-test:key1 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@16 -- # keyctl search @s user :spdk-test:key1 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@33 -- # sn=719174657 00:30:27.920 12:16:15 
keyring_linux -- keyring/linux.sh@34 -- # keyctl unlink 719174657 00:30:27.920 1 links removed 00:30:27.920 12:16:15 keyring_linux -- keyring/linux.sh@41 -- # killprocess 527407 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 527407 ']' 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 527407 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 527407 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_1 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_1 = sudo ']' 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 527407' 00:30:27.920 killing process with pid 527407 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@967 -- # kill 527407 00:30:27.920 Received shutdown signal, test time was about 1.000000 seconds 00:30:27.920 00:30:27.920 Latency(us) 00:30:27.920 Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:30:27.920 =================================================================================================================== 00:30:27.920 Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:30:27.920 12:16:15 keyring_linux -- common/autotest_common.sh@972 -- # wait 527407 00:30:28.180 12:16:15 keyring_linux -- keyring/linux.sh@42 -- # killprocess 527174 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@948 -- # '[' -z 527174 ']' 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@952 -- # kill -0 527174 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@953 -- # uname 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@953 -- # '[' Linux = Linux ']' 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@954 -- # ps --no-headers -o comm= 527174 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@954 -- # process_name=reactor_0 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@958 -- # '[' reactor_0 = sudo ']' 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@966 -- # echo 'killing process with pid 527174' 00:30:28.180 killing process with pid 527174 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@967 -- # kill 527174 00:30:28.180 12:16:15 keyring_linux -- common/autotest_common.sh@972 -- # wait 527174 00:30:28.750 00:30:28.750 real 0m5.285s 00:30:28.750 user 0m9.288s 00:30:28.750 sys 0m1.194s 00:30:28.750 12:16:15 keyring_linux -- common/autotest_common.sh@1124 -- # xtrace_disable 00:30:28.750 12:16:15 keyring_linux -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 ************************************ 00:30:28.750 END TEST keyring_linux 00:30:28.750 ************************************ 00:30:28.750 12:16:15 -- common/autotest_common.sh@1142 -- # return 0 00:30:28.750 12:16:15 -- spdk/autotest.sh@308 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@312 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@316 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@321 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@330 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@335 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@339 -- # '[' 0 -eq 
1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@343 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@347 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@352 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@356 -- # '[' 0 -eq 1 ']' 00:30:28.750 12:16:15 -- spdk/autotest.sh@363 -- # [[ 0 -eq 1 ]] 00:30:28.750 12:16:15 -- spdk/autotest.sh@367 -- # [[ 0 -eq 1 ]] 00:30:28.750 12:16:15 -- spdk/autotest.sh@371 -- # [[ 0 -eq 1 ]] 00:30:28.750 12:16:15 -- spdk/autotest.sh@375 -- # [[ 0 -eq 1 ]] 00:30:28.750 12:16:15 -- spdk/autotest.sh@380 -- # trap - SIGINT SIGTERM EXIT 00:30:28.750 12:16:15 -- spdk/autotest.sh@382 -- # timing_enter post_cleanup 00:30:28.750 12:16:15 -- common/autotest_common.sh@722 -- # xtrace_disable 00:30:28.750 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:30:28.750 12:16:15 -- spdk/autotest.sh@383 -- # autotest_cleanup 00:30:28.750 12:16:15 -- common/autotest_common.sh@1392 -- # local autotest_es=0 00:30:28.750 12:16:15 -- common/autotest_common.sh@1393 -- # xtrace_disable 00:30:28.750 12:16:15 -- common/autotest_common.sh@10 -- # set +x 00:30:34.031 INFO: APP EXITING 00:30:34.031 INFO: killing all VMs 00:30:34.031 INFO: killing vhost app 00:30:34.031 INFO: EXIT DONE 00:30:35.939 0000:5e:00.0 (8086 0a54): Already using the nvme driver 00:30:35.939 0000:00:04.7 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.6 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.5 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.4 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.3 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.2 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.1 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:00:04.0 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.7 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.6 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.5 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.4 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.3 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.2 (8086 2021): Already using the ioatdma driver 00:30:35.939 0000:80:04.1 (8086 2021): Already using the ioatdma driver 00:30:36.199 0000:80:04.0 (8086 2021): Already using the ioatdma driver 00:30:38.737 Cleaning 00:30:38.737 Removing: /var/run/dpdk/spdk0/config 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-0 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-1 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-2 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-1-3 00:30:38.737 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:30:38.737 Removing: /var/run/dpdk/spdk0/hugepage_info 00:30:38.737 Removing: /var/run/dpdk/spdk1/config 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-0 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-1 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-2 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-0-3 00:30:38.737 Removing: 
/var/run/dpdk/spdk1/fbarray_memseg-2048k-1-0 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-1 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-2 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memseg-2048k-1-3 00:30:38.737 Removing: /var/run/dpdk/spdk1/fbarray_memzone 00:30:38.737 Removing: /var/run/dpdk/spdk1/hugepage_info 00:30:38.737 Removing: /var/run/dpdk/spdk1/mp_socket 00:30:38.737 Removing: /var/run/dpdk/spdk2/config 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-0 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-1 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-2 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-0-3 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-0 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-1 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-2 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memseg-2048k-1-3 00:30:38.737 Removing: /var/run/dpdk/spdk2/fbarray_memzone 00:30:38.737 Removing: /var/run/dpdk/spdk2/hugepage_info 00:30:38.737 Removing: /var/run/dpdk/spdk3/config 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-0 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-1 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-2 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-0-3 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-0 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-1 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-2 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memseg-2048k-1-3 00:30:38.737 Removing: /var/run/dpdk/spdk3/fbarray_memzone 00:30:38.737 Removing: /var/run/dpdk/spdk3/hugepage_info 00:30:38.737 Removing: /var/run/dpdk/spdk4/config 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-0 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-1 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-2 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-0-3 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-0 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-1 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-2 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memseg-2048k-1-3 00:30:38.737 Removing: /var/run/dpdk/spdk4/fbarray_memzone 00:30:38.737 Removing: /var/run/dpdk/spdk4/hugepage_info 00:30:38.737 Removing: /dev/shm/bdev_svc_trace.1 00:30:38.737 Removing: /dev/shm/nvmf_trace.0 00:30:38.737 Removing: /dev/shm/spdk_tgt_trace.pid142677 00:30:38.737 Removing: /var/run/dpdk/spdk0 00:30:38.737 Removing: /var/run/dpdk/spdk1 00:30:38.737 Removing: /var/run/dpdk/spdk2 00:30:38.737 Removing: /var/run/dpdk/spdk3 00:30:38.997 Removing: /var/run/dpdk/spdk4 00:30:38.997 Removing: /var/run/dpdk/spdk_pid140494 00:30:38.997 Removing: /var/run/dpdk/spdk_pid141607 00:30:38.997 Removing: /var/run/dpdk/spdk_pid142677 00:30:38.997 Removing: /var/run/dpdk/spdk_pid143311 00:30:38.997 Removing: /var/run/dpdk/spdk_pid144263 00:30:38.997 Removing: /var/run/dpdk/spdk_pid144499 00:30:38.997 Removing: /var/run/dpdk/spdk_pid145476 00:30:38.997 Removing: /var/run/dpdk/spdk_pid145703 00:30:38.997 Removing: /var/run/dpdk/spdk_pid145829 00:30:38.997 Removing: /var/run/dpdk/spdk_pid147361 00:30:38.997 Removing: /var/run/dpdk/spdk_pid148694 00:30:38.997 Removing: /var/run/dpdk/spdk_pid149111 00:30:38.997 
Removing: /var/run/dpdk/spdk_pid149531 00:30:38.997 Removing: /var/run/dpdk/spdk_pid149833 00:30:38.997 Removing: /var/run/dpdk/spdk_pid150119 00:30:38.997 Removing: /var/run/dpdk/spdk_pid150381 00:30:38.997 Removing: /var/run/dpdk/spdk_pid150633 00:30:38.997 Removing: /var/run/dpdk/spdk_pid151203 00:30:38.997 Removing: /var/run/dpdk/spdk_pid152044 00:30:38.997 Removing: /var/run/dpdk/spdk_pid155026 00:30:38.997 Removing: /var/run/dpdk/spdk_pid155293 00:30:38.997 Removing: /var/run/dpdk/spdk_pid155557 00:30:38.997 Removing: /var/run/dpdk/spdk_pid155782 00:30:38.997 Removing: /var/run/dpdk/spdk_pid156059 00:30:38.997 Removing: /var/run/dpdk/spdk_pid156287 00:30:38.997 Removing: /var/run/dpdk/spdk_pid156763 00:30:38.997 Removing: /var/run/dpdk/spdk_pid156795 00:30:38.997 Removing: /var/run/dpdk/spdk_pid157055 00:30:38.997 Removing: /var/run/dpdk/spdk_pid157288 00:30:38.997 Removing: /var/run/dpdk/spdk_pid157517 00:30:38.997 Removing: /var/run/dpdk/spdk_pid157564 00:30:38.997 Removing: /var/run/dpdk/spdk_pid158109 00:30:38.997 Removing: /var/run/dpdk/spdk_pid158360 00:30:38.997 Removing: /var/run/dpdk/spdk_pid158649 00:30:38.997 Removing: /var/run/dpdk/spdk_pid158921 00:30:38.997 Removing: /var/run/dpdk/spdk_pid158945 00:30:38.997 Removing: /var/run/dpdk/spdk_pid159095 00:30:38.997 Removing: /var/run/dpdk/spdk_pid159354 00:30:38.997 Removing: /var/run/dpdk/spdk_pid159632 00:30:38.997 Removing: /var/run/dpdk/spdk_pid159896 00:30:38.997 Removing: /var/run/dpdk/spdk_pid160161 00:30:38.997 Removing: /var/run/dpdk/spdk_pid160425 00:30:38.997 Removing: /var/run/dpdk/spdk_pid160690 00:30:38.997 Removing: /var/run/dpdk/spdk_pid160949 00:30:38.997 Removing: /var/run/dpdk/spdk_pid161203 00:30:38.997 Removing: /var/run/dpdk/spdk_pid161463 00:30:38.997 Removing: /var/run/dpdk/spdk_pid161713 00:30:38.997 Removing: /var/run/dpdk/spdk_pid161973 00:30:38.997 Removing: /var/run/dpdk/spdk_pid162224 00:30:38.997 Removing: /var/run/dpdk/spdk_pid162475 00:30:38.997 Removing: /var/run/dpdk/spdk_pid162723 00:30:38.997 Removing: /var/run/dpdk/spdk_pid162969 00:30:38.997 Removing: /var/run/dpdk/spdk_pid163242 00:30:38.997 Removing: /var/run/dpdk/spdk_pid163496 00:30:38.997 Removing: /var/run/dpdk/spdk_pid163746 00:30:38.997 Removing: /var/run/dpdk/spdk_pid164001 00:30:38.997 Removing: /var/run/dpdk/spdk_pid164247 00:30:38.997 Removing: /var/run/dpdk/spdk_pid164319 00:30:38.997 Removing: /var/run/dpdk/spdk_pid164624 00:30:38.997 Removing: /var/run/dpdk/spdk_pid168314 00:30:38.997 Removing: /var/run/dpdk/spdk_pid172767 00:30:38.997 Removing: /var/run/dpdk/spdk_pid182837 00:30:38.997 Removing: /var/run/dpdk/spdk_pid183456 00:30:38.997 Removing: /var/run/dpdk/spdk_pid187803 00:30:38.997 Removing: /var/run/dpdk/spdk_pid188053 00:30:38.997 Removing: /var/run/dpdk/spdk_pid192318 00:30:38.997 Removing: /var/run/dpdk/spdk_pid198708 00:30:38.997 Removing: /var/run/dpdk/spdk_pid201310 00:30:38.997 Removing: /var/run/dpdk/spdk_pid211734 00:30:38.997 Removing: /var/run/dpdk/spdk_pid220631 00:30:38.997 Removing: /var/run/dpdk/spdk_pid222458 00:30:38.997 Removing: /var/run/dpdk/spdk_pid223387 00:30:38.997 Removing: /var/run/dpdk/spdk_pid240230 00:30:38.997 Removing: /var/run/dpdk/spdk_pid244189 00:30:38.997 Removing: /var/run/dpdk/spdk_pid287163 00:30:39.258 Removing: /var/run/dpdk/spdk_pid292552 00:30:39.258 Removing: /var/run/dpdk/spdk_pid299279 00:30:39.258 Removing: /var/run/dpdk/spdk_pid305290 00:30:39.258 Removing: /var/run/dpdk/spdk_pid305292 00:30:39.258 Removing: /var/run/dpdk/spdk_pid306202 00:30:39.258 Removing: 
/var/run/dpdk/spdk_pid307118 00:30:39.258 Removing: /var/run/dpdk/spdk_pid307937 00:30:39.258 Removing: /var/run/dpdk/spdk_pid308503 00:30:39.258 Removing: /var/run/dpdk/spdk_pid308507 00:30:39.258 Removing: /var/run/dpdk/spdk_pid308743 00:30:39.258 Removing: /var/run/dpdk/spdk_pid308927 00:30:39.258 Removing: /var/run/dpdk/spdk_pid308965 00:30:39.258 Removing: /var/run/dpdk/spdk_pid309822 00:30:39.258 Removing: /var/run/dpdk/spdk_pid310600 00:30:39.258 Removing: /var/run/dpdk/spdk_pid311514 00:30:39.258 Removing: /var/run/dpdk/spdk_pid312032 00:30:39.258 Removing: /var/run/dpdk/spdk_pid312199 00:30:39.258 Removing: /var/run/dpdk/spdk_pid312429 00:30:39.258 Removing: /var/run/dpdk/spdk_pid313613 00:30:39.258 Removing: /var/run/dpdk/spdk_pid314652 00:30:39.258 Removing: /var/run/dpdk/spdk_pid322766 00:30:39.258 Removing: /var/run/dpdk/spdk_pid347853 00:30:39.258 Removing: /var/run/dpdk/spdk_pid352296 00:30:39.258 Removing: /var/run/dpdk/spdk_pid353955 00:30:39.258 Removing: /var/run/dpdk/spdk_pid355791 00:30:39.258 Removing: /var/run/dpdk/spdk_pid356028 00:30:39.258 Removing: /var/run/dpdk/spdk_pid356260 00:30:39.258 Removing: /var/run/dpdk/spdk_pid356504 00:30:39.258 Removing: /var/run/dpdk/spdk_pid357016 00:30:39.258 Removing: /var/run/dpdk/spdk_pid358869 00:30:39.258 Removing: /var/run/dpdk/spdk_pid359854 00:30:39.258 Removing: /var/run/dpdk/spdk_pid360429 00:30:39.258 Removing: /var/run/dpdk/spdk_pid362668 00:30:39.258 Removing: /var/run/dpdk/spdk_pid363388 00:30:39.258 Removing: /var/run/dpdk/spdk_pid364115 00:30:39.258 Removing: /var/run/dpdk/spdk_pid368162 00:30:39.258 Removing: /var/run/dpdk/spdk_pid378522 00:30:39.258 Removing: /var/run/dpdk/spdk_pid382443 00:30:39.258 Removing: /var/run/dpdk/spdk_pid388430 00:30:39.258 Removing: /var/run/dpdk/spdk_pid389837 00:30:39.258 Removing: /var/run/dpdk/spdk_pid391270 00:30:39.258 Removing: /var/run/dpdk/spdk_pid395594 00:30:39.258 Removing: /var/run/dpdk/spdk_pid399645 00:30:39.258 Removing: /var/run/dpdk/spdk_pid407125 00:30:39.258 Removing: /var/run/dpdk/spdk_pid407183 00:30:39.258 Removing: /var/run/dpdk/spdk_pid411674 00:30:39.258 Removing: /var/run/dpdk/spdk_pid411902 00:30:39.258 Removing: /var/run/dpdk/spdk_pid412133 00:30:39.258 Removing: /var/run/dpdk/spdk_pid412589 00:30:39.258 Removing: /var/run/dpdk/spdk_pid412600 00:30:39.258 Removing: /var/run/dpdk/spdk_pid417075 00:30:39.258 Removing: /var/run/dpdk/spdk_pid417646 00:30:39.258 Removing: /var/run/dpdk/spdk_pid422097 00:30:39.258 Removing: /var/run/dpdk/spdk_pid425236 00:30:39.258 Removing: /var/run/dpdk/spdk_pid430631 00:30:39.258 Removing: /var/run/dpdk/spdk_pid435817 00:30:39.258 Removing: /var/run/dpdk/spdk_pid444412 00:30:39.258 Removing: /var/run/dpdk/spdk_pid451526 00:30:39.258 Removing: /var/run/dpdk/spdk_pid451528 00:30:39.258 Removing: /var/run/dpdk/spdk_pid469176 00:30:39.258 Removing: /var/run/dpdk/spdk_pid469926 00:30:39.258 Removing: /var/run/dpdk/spdk_pid470841 00:30:39.258 Removing: /var/run/dpdk/spdk_pid471524 00:30:39.258 Removing: /var/run/dpdk/spdk_pid472470 00:30:39.258 Removing: /var/run/dpdk/spdk_pid473164 00:30:39.258 Removing: /var/run/dpdk/spdk_pid473853 00:30:39.258 Removing: /var/run/dpdk/spdk_pid474383 00:30:39.258 Removing: /var/run/dpdk/spdk_pid478596 00:30:39.258 Removing: /var/run/dpdk/spdk_pid478833 00:30:39.258 Removing: /var/run/dpdk/spdk_pid484794 00:30:39.258 Removing: /var/run/dpdk/spdk_pid484941 00:30:39.258 Removing: /var/run/dpdk/spdk_pid487320 00:30:39.258 Removing: /var/run/dpdk/spdk_pid494890 00:30:39.258 Removing: 
/var/run/dpdk/spdk_pid494900 00:30:39.258 Removing: /var/run/dpdk/spdk_pid499917 00:30:39.258 Removing: /var/run/dpdk/spdk_pid501881 00:30:39.518 Removing: /var/run/dpdk/spdk_pid503843 00:30:39.518 Removing: /var/run/dpdk/spdk_pid504891 00:30:39.518 Removing: /var/run/dpdk/spdk_pid506864 00:30:39.518 Removing: /var/run/dpdk/spdk_pid508030 00:30:39.518 Removing: /var/run/dpdk/spdk_pid517165 00:30:39.518 Removing: /var/run/dpdk/spdk_pid517631 00:30:39.518 Removing: /var/run/dpdk/spdk_pid518268 00:30:39.518 Removing: /var/run/dpdk/spdk_pid520463 00:30:39.518 Removing: /var/run/dpdk/spdk_pid521017 00:30:39.518 Removing: /var/run/dpdk/spdk_pid521488 00:30:39.518 Removing: /var/run/dpdk/spdk_pid525080 00:30:39.518 Removing: /var/run/dpdk/spdk_pid525094 00:30:39.518 Removing: /var/run/dpdk/spdk_pid526633 00:30:39.518 Removing: /var/run/dpdk/spdk_pid527174 00:30:39.518 Removing: /var/run/dpdk/spdk_pid527407 00:30:39.518 Clean 00:30:39.518 12:16:26 -- common/autotest_common.sh@1451 -- # return 0 00:30:39.518 12:16:26 -- spdk/autotest.sh@384 -- # timing_exit post_cleanup 00:30:39.518 12:16:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.518 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:30:39.518 12:16:26 -- spdk/autotest.sh@386 -- # timing_exit autotest 00:30:39.518 12:16:26 -- common/autotest_common.sh@728 -- # xtrace_disable 00:30:39.518 12:16:26 -- common/autotest_common.sh@10 -- # set +x 00:30:39.518 12:16:26 -- spdk/autotest.sh@387 -- # chmod a+r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:30:39.518 12:16:26 -- spdk/autotest.sh@389 -- # [[ -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log ]] 00:30:39.518 12:16:26 -- spdk/autotest.sh@389 -- # rm -f /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/udev.log 00:30:39.518 12:16:26 -- spdk/autotest.sh@391 -- # hash lcov 00:30:39.518 12:16:26 -- spdk/autotest.sh@391 -- # [[ CC_TYPE=gcc == *\c\l\a\n\g* ]] 00:30:39.518 12:16:26 -- spdk/autotest.sh@393 -- # hostname 00:30:39.518 12:16:26 -- spdk/autotest.sh@393 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -c -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk -t spdk-wfp-08 -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info 00:30:39.778 geninfo: WARNING: invalid characters removed from testname! 
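From here autotest.sh assembles code coverage: the capture just written (cov_test.info, tagged with the spdk-wfp-08 hostname) is merged with the baseline cov_base.info and then stripped of DPDK, /usr, and example/app sources in the lcov calls that follow. The geninfo warning only reports that characters geninfo does not accept were dropped from the test name, and the capture continues. Once cov_total.info exists, an HTML report can be rendered from it; that step is not part of the trace shown here, so the output location below is illustrative only:

    out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/output   # the job's ../output directory
    genhtml --branch-coverage --legend -o "$out/coverage" "$out/cov_total.info"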
00:31:01.785 12:16:46 -- spdk/autotest.sh@394 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_base.info -a /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_test.info -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:02.354 12:16:49 -- spdk/autotest.sh@395 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/dpdk/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:04.263 12:16:51 -- spdk/autotest.sh@396 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '/usr/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:06.171 12:16:53 -- spdk/autotest.sh@397 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/examples/vmd/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:08.079 12:16:55 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:09.988 12:16:56 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --no-external -q -r /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/cov_total.info 00:31:11.897 12:16:58 -- spdk/autotest.sh@400 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:31:11.897 12:16:58 -- common/autobuild_common.sh@15 -- $ source /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/common.sh 00:31:11.897 12:16:58 -- scripts/common.sh@508 -- $ [[ -e /bin/wpdk_common.sh ]] 00:31:11.898 12:16:58 -- scripts/common.sh@516 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:31:11.898 12:16:58 -- scripts/common.sh@517 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh 00:31:11.898 12:16:58 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.898 12:16:58 -- paths/export.sh@3 -- $ 
PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.898 12:16:58 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.898 12:16:58 -- paths/export.sh@5 -- $ export PATH 00:31:11.898 12:16:58 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/sys_sgci/.local/bin:/home/sys_sgci/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:31:11.898 12:16:58 -- common/autobuild_common.sh@446 -- $ out=/var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output 00:31:11.898 12:16:58 -- common/autobuild_common.sh@447 -- $ date +%s 00:31:11.898 12:16:58 -- common/autobuild_common.sh@447 -- $ mktemp -dt spdk_1721902618.XXXXXX 00:31:11.898 12:16:58 -- common/autobuild_common.sh@447 -- $ SPDK_WORKSPACE=/tmp/spdk_1721902618.0ySyRu 00:31:11.898 12:16:58 -- common/autobuild_common.sh@449 -- $ [[ -n '' ]] 00:31:11.898 12:16:58 -- common/autobuild_common.sh@453 -- $ '[' -n '' ']' 00:31:11.898 12:16:58 -- common/autobuild_common.sh@456 -- $ scanbuild_exclude='--exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/' 00:31:11.898 12:16:58 -- common/autobuild_common.sh@460 -- $ scanbuild_exclude+=' --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp' 00:31:11.898 12:16:58 -- common/autobuild_common.sh@462 -- $ scanbuild='scan-build -o /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/scan-build-tmp --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/dpdk/ --exclude /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/xnvme --exclude /tmp --status-bugs' 00:31:11.898 12:16:58 -- common/autobuild_common.sh@463 -- $ get_config_params 00:31:11.898 12:16:58 -- common/autotest_common.sh@396 -- $ xtrace_disable 00:31:11.898 12:16:58 -- common/autotest_common.sh@10 -- $ set +x 00:31:11.898 12:16:58 -- common/autobuild_common.sh@463 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-coverage --with-ublk --with-vfio-user' 00:31:11.898 12:16:58 -- common/autobuild_common.sh@465 -- $ start_monitor_resources 00:31:11.898 12:16:58 -- pm/common@17 -- $ local monitor 00:31:11.898 12:16:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.898 12:16:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.898 12:16:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.898 12:16:58 -- pm/common@21 -- $ date +%s 00:31:11.898 12:16:58 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:11.898 12:16:58 -- pm/common@21 -- $ date +%s 00:31:11.898 
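The pm/common prologue above builds a per-run prefix from date +%s (1721902618 here) and, just below, launches four background samplers (CPU load, vmstat, BMC power, CPU temperature) against the shared power/ output directory, recording their PIDs so the EXIT trap installed at autobuild_common.sh@466 can TERM them when the stage ends. The same start-sampler-then-trap pattern in miniature, with plain vmstat standing in for the collect-* helpers and an arbitrary directory (a sketch, not the pm scripts themselves):

    power_dir=/tmp/power-monitor                     # stand-in for .../output/power
    mkdir -p "$power_dir"
    vmstat 1 >"$power_dir/vmstat.log" &              # background sampler
    echo $! >"$power_dir/collect-vmstat.pid"         # pidfile the cleanup trap reads
    trap 'kill -TERM "$(cat "$power_dir/collect-vmstat.pid")" 2>/dev/null' EXIT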
12:16:58 -- pm/common@25 -- $ sleep 1 00:31:11.898 12:16:58 -- pm/common@21 -- $ date +%s 00:31:11.898 12:16:58 -- pm/common@21 -- $ date +%s 00:31:11.898 12:16:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-load -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902618 00:31:11.898 12:16:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-vmstat -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902618 00:31:11.898 12:16:58 -- pm/common@21 -- $ sudo -E /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-bmc-pm -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902618 00:31:11.898 12:16:58 -- pm/common@21 -- $ /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/scripts/perf/pm/collect-cpu-temp -d /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power -l -p monitor.autopackage.sh.1721902618 00:31:11.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902618_collect-vmstat.pm.log 00:31:11.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902618_collect-cpu-load.pm.log 00:31:11.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902618_collect-cpu-temp.pm.log 00:31:11.898 Redirecting to /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/monitor.autopackage.sh.1721902618_collect-bmc-pm.bmc.pm.log 00:31:12.838 12:16:59 -- common/autobuild_common.sh@466 -- $ trap stop_monitor_resources EXIT 00:31:12.838 12:16:59 -- spdk/autopackage.sh@10 -- $ MAKEFLAGS=-j96 00:31:12.838 12:16:59 -- spdk/autopackage.sh@11 -- $ cd /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:12.838 12:16:59 -- spdk/autopackage.sh@13 -- $ [[ 0 -eq 1 ]] 00:31:12.838 12:16:59 -- spdk/autopackage.sh@18 -- $ [[ 0 -eq 0 ]] 00:31:12.838 12:16:59 -- spdk/autopackage.sh@19 -- $ timing_finish 00:31:12.838 12:16:59 -- common/autotest_common.sh@734 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:31:12.838 12:16:59 -- common/autotest_common.sh@735 -- $ '[' -x /usr/local/FlameGraph/flamegraph.pl ']' 00:31:12.838 12:16:59 -- common/autotest_common.sh@737 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/timing.txt 00:31:12.838 12:16:59 -- spdk/autopackage.sh@20 -- $ exit 0 00:31:12.838 12:16:59 -- spdk/autopackage.sh@1 -- $ stop_monitor_resources 00:31:12.838 12:16:59 -- pm/common@29 -- $ signal_monitor_resources TERM 00:31:12.838 12:16:59 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:31:12.838 12:16:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.838 12:16:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-load.pid ]] 00:31:12.838 12:16:59 -- pm/common@44 -- $ pid=537318 00:31:12.838 12:16:59 -- pm/common@50 -- $ kill -TERM 537318 00:31:12.838 12:16:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.838 12:16:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-vmstat.pid ]] 00:31:12.838 12:16:59 -- pm/common@44 -- $ pid=537320 00:31:12.838 12:16:59 -- pm/common@50 -- $ kill 
-TERM 537320 00:31:12.838 12:16:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.838 12:16:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-cpu-temp.pid ]] 00:31:12.838 12:16:59 -- pm/common@44 -- $ pid=537322 00:31:12.838 12:16:59 -- pm/common@50 -- $ kill -TERM 537322 00:31:12.838 12:16:59 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:31:12.838 12:16:59 -- pm/common@43 -- $ [[ -e /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/collect-bmc-pm.pid ]] 00:31:12.838 12:16:59 -- pm/common@44 -- $ pid=537345 00:31:12.838 12:16:59 -- pm/common@50 -- $ sudo -E kill -TERM 537345 00:31:12.838 + [[ -n 37884 ]] 00:31:12.838 + sudo kill 37884 00:31:12.849 [Pipeline] } 00:31:12.867 [Pipeline] // stage 00:31:12.872 [Pipeline] } 00:31:12.888 [Pipeline] // timeout 00:31:12.893 [Pipeline] } 00:31:12.909 [Pipeline] // catchError 00:31:12.914 [Pipeline] } 00:31:12.930 [Pipeline] // wrap 00:31:12.936 [Pipeline] } 00:31:12.951 [Pipeline] // catchError 00:31:12.960 [Pipeline] stage 00:31:12.962 [Pipeline] { (Epilogue) 00:31:12.977 [Pipeline] catchError 00:31:12.979 [Pipeline] { 00:31:12.992 [Pipeline] echo 00:31:12.994 Cleanup processes 00:31:12.999 [Pipeline] sh 00:31:13.287 + sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.287 537432 /usr/bin/ipmitool sdr dump /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk/../output/power/sdr.cache 00:31:13.287 537717 sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.300 [Pipeline] sh 00:31:13.584 ++ sudo pgrep -af /var/jenkins/workspace/nvmf-tcp-phy-autotest/spdk 00:31:13.584 ++ grep -v 'sudo pgrep' 00:31:13.584 ++ awk '{print $1}' 00:31:13.584 + sudo kill -9 537432 00:31:13.598 [Pipeline] sh 00:31:13.930 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:31:23.928 [Pipeline] sh 00:31:24.215 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:31:24.215 Artifacts sizes are good 00:31:24.230 [Pipeline] archiveArtifacts 00:31:24.237 Archiving artifacts 00:31:24.392 [Pipeline] sh 00:31:24.678 + sudo chown -R sys_sgci /var/jenkins/workspace/nvmf-tcp-phy-autotest 00:31:24.693 [Pipeline] cleanWs 00:31:24.703 [WS-CLEANUP] Deleting project workspace... 00:31:24.703 [WS-CLEANUP] Deferred wipeout is used... 00:31:24.710 [WS-CLEANUP] done 00:31:24.712 [Pipeline] } 00:31:24.735 [Pipeline] // catchError 00:31:24.748 [Pipeline] sh 00:31:25.032 + logger -p user.info -t JENKINS-CI 00:31:25.044 [Pipeline] } 00:31:25.064 [Pipeline] // stage 00:31:25.071 [Pipeline] } 00:31:25.090 [Pipeline] // node 00:31:25.097 [Pipeline] End of Pipeline 00:31:25.138 Finished: SUCCESS